00:00:00.000 Started by upstream project "autotest-nightly" build number 3920 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3295 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.136 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.137 The recommended git tool is: git 00:00:00.137 using credential 00000000-0000-0000-0000-000000000002 00:00:00.140 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.186 Fetching changes from the remote Git repository 00:00:00.193 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.234 Using shallow fetch with depth 1 00:00:00.234 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.234 > git --version # timeout=10 00:00:00.270 > git --version # 'git version 2.39.2' 00:00:00.270 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.287 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.287 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.832 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.842 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.853 Checking out Revision c396a3cd44e4090a57fb151c18fefbf4a9bd324b (FETCH_HEAD) 00:00:06.853 > git config core.sparsecheckout # timeout=10 00:00:06.862 > git read-tree -mu HEAD # timeout=10 00:00:06.880 > git checkout -f c396a3cd44e4090a57fb151c18fefbf4a9bd324b # timeout=5 00:00:06.902 Commit message: "jenkins/jjb-config: Use freebsd14 for the pkgdep-freebsd job" 00:00:06.902 > git rev-list --no-walk c396a3cd44e4090a57fb151c18fefbf4a9bd324b # timeout=10 00:00:07.003 [Pipeline] Start of Pipeline 00:00:07.017 [Pipeline] library 00:00:07.019 Loading library shm_lib@master 00:00:07.019 Library shm_lib@master is cached. Copying from home. 00:00:07.034 [Pipeline] node 00:00:07.045 Running on VM-host-SM17 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:07.046 [Pipeline] { 00:00:07.053 [Pipeline] catchError 00:00:07.055 [Pipeline] { 00:00:07.064 [Pipeline] wrap 00:00:07.070 [Pipeline] { 00:00:07.077 [Pipeline] stage 00:00:07.078 [Pipeline] { (Prologue) 00:00:07.094 [Pipeline] echo 00:00:07.095 Node: VM-host-SM17 00:00:07.100 [Pipeline] cleanWs 00:00:07.108 [WS-CLEANUP] Deleting project workspace... 00:00:07.108 [WS-CLEANUP] Deferred wipeout is used... 
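
The jbp checkout at the top of this log is a shallow, single-revision fetch rather than a full clone: the job fetches refs/heads/master with --depth=1 and then checks out the fetched commit detached. A minimal sketch of that same sequence, with the repository URL copied from the log; Jenkins supplies the Gerrit credentials via GIT_ASKPASS and the Intel proxy setting, both of which are omitted here:

    # Shallow-fetch one branch tip and check out that exact revision (detached HEAD).
    git init jbp && cd jbp
    git fetch --tags --force --progress --depth=1 -- \
        https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    git checkout -f "$(git rev-parse 'FETCH_HEAD^{commit}')"
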
00:00:07.114 [WS-CLEANUP] done 00:00:07.344 [Pipeline] setCustomBuildProperty 00:00:07.425 [Pipeline] httpRequest 00:00:07.443 [Pipeline] echo 00:00:07.445 Sorcerer 10.211.164.101 is alive 00:00:07.451 [Pipeline] httpRequest 00:00:07.454 HttpMethod: GET 00:00:07.455 URL: http://10.211.164.101/packages/jbp_c396a3cd44e4090a57fb151c18fefbf4a9bd324b.tar.gz 00:00:07.455 Sending request to url: http://10.211.164.101/packages/jbp_c396a3cd44e4090a57fb151c18fefbf4a9bd324b.tar.gz 00:00:07.475 Response Code: HTTP/1.1 200 OK 00:00:07.476 Success: Status code 200 is in the accepted range: 200,404 00:00:07.476 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_c396a3cd44e4090a57fb151c18fefbf4a9bd324b.tar.gz 00:00:29.325 [Pipeline] sh 00:00:29.606 + tar --no-same-owner -xf jbp_c396a3cd44e4090a57fb151c18fefbf4a9bd324b.tar.gz 00:00:29.622 [Pipeline] httpRequest 00:00:29.648 [Pipeline] echo 00:00:29.649 Sorcerer 10.211.164.101 is alive 00:00:29.658 [Pipeline] httpRequest 00:00:29.663 HttpMethod: GET 00:00:29.663 URL: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:00:29.664 Sending request to url: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:00:29.677 Response Code: HTTP/1.1 200 OK 00:00:29.677 Success: Status code 200 is in the accepted range: 200,404 00:00:29.677 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:02:37.835 [Pipeline] sh 00:02:38.115 + tar --no-same-owner -xf spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:02:41.411 [Pipeline] sh 00:02:41.690 + git -C spdk log --oneline -n5 00:02:41.690 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata. 
00:02:41.690 fc2398dfa raid: clear base bdev configure_cb after executing 00:02:41.690 5558f3f50 raid: complete bdev_raid_create after sb is written 00:02:41.690 d005e023b raid: fix empty slot not updated in sb after resize 00:02:41.690 f41dbc235 nvme: always specify CC_CSS_NVM when CAP_CSS_IOCS is not set 00:02:41.707 [Pipeline] writeFile 00:02:41.723 [Pipeline] sh 00:02:42.003 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:02:42.014 [Pipeline] sh 00:02:42.292 + cat autorun-spdk.conf 00:02:42.292 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:42.292 SPDK_TEST_NVMF=1 00:02:42.292 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:42.292 SPDK_TEST_URING=1 00:02:42.292 SPDK_TEST_VFIOUSER=1 00:02:42.292 SPDK_TEST_USDT=1 00:02:42.292 SPDK_RUN_ASAN=1 00:02:42.292 SPDK_RUN_UBSAN=1 00:02:42.292 NET_TYPE=virt 00:02:42.292 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:42.298 RUN_NIGHTLY=1 00:02:42.301 [Pipeline] } 00:02:42.317 [Pipeline] // stage 00:02:42.332 [Pipeline] stage 00:02:42.334 [Pipeline] { (Run VM) 00:02:42.349 [Pipeline] sh 00:02:42.628 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:42.628 + echo 'Start stage prepare_nvme.sh' 00:02:42.628 Start stage prepare_nvme.sh 00:02:42.629 + [[ -n 5 ]] 00:02:42.629 + disk_prefix=ex5 00:02:42.629 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:02:42.629 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:02:42.629 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:02:42.629 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:42.629 ++ SPDK_TEST_NVMF=1 00:02:42.629 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:42.629 ++ SPDK_TEST_URING=1 00:02:42.629 ++ SPDK_TEST_VFIOUSER=1 00:02:42.629 ++ SPDK_TEST_USDT=1 00:02:42.629 ++ SPDK_RUN_ASAN=1 00:02:42.629 ++ SPDK_RUN_UBSAN=1 00:02:42.629 ++ NET_TYPE=virt 00:02:42.629 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:42.629 ++ RUN_NIGHTLY=1 00:02:42.629 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:42.629 + nvme_files=() 00:02:42.629 + declare -A nvme_files 00:02:42.629 + backend_dir=/var/lib/libvirt/images/backends 00:02:42.629 + nvme_files['nvme.img']=5G 00:02:42.629 + nvme_files['nvme-cmb.img']=5G 00:02:42.629 + nvme_files['nvme-multi0.img']=4G 00:02:42.629 + nvme_files['nvme-multi1.img']=4G 00:02:42.629 + nvme_files['nvme-multi2.img']=4G 00:02:42.629 + nvme_files['nvme-openstack.img']=8G 00:02:42.629 + nvme_files['nvme-zns.img']=5G 00:02:42.629 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:42.629 + (( SPDK_TEST_FTL == 1 )) 00:02:42.629 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:42.629 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:02:42.629 + for nvme in "${!nvme_files[@]}" 00:02:42.629 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:02:42.629 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:42.629 + for nvme in "${!nvme_files[@]}" 00:02:42.629 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:02:42.629 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:42.629 + for nvme in "${!nvme_files[@]}" 00:02:42.629 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:02:42.629 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:42.629 + for nvme in "${!nvme_files[@]}" 00:02:42.629 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:02:42.629 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:42.629 + for nvme in "${!nvme_files[@]}" 00:02:42.629 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:02:42.629 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:42.629 + for nvme in "${!nvme_files[@]}" 00:02:42.629 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:02:42.629 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:42.629 + for nvme in "${!nvme_files[@]}" 00:02:42.629 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:02:43.564 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:43.564 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:02:43.564 + echo 'End stage prepare_nvme.sh' 00:02:43.564 End stage prepare_nvme.sh 00:02:43.575 [Pipeline] sh 00:02:43.853 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:43.854 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora38 00:02:43.854 00:02:43.854 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:02:43.854 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:02:43.854 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:43.854 HELP=0 00:02:43.854 DRY_RUN=0 00:02:43.854 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img, 00:02:43.854 NVME_DISKS_TYPE=nvme,nvme, 00:02:43.854 NVME_AUTO_CREATE=0 00:02:43.854 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img, 00:02:43.854 NVME_CMB=,, 00:02:43.854 NVME_PMR=,, 00:02:43.854 NVME_ZNS=,, 00:02:43.854 NVME_MS=,, 00:02:43.854 NVME_FDP=,, 
00:02:43.854 SPDK_VAGRANT_DISTRO=fedora38 00:02:43.854 SPDK_VAGRANT_VMCPU=10 00:02:43.854 SPDK_VAGRANT_VMRAM=12288 00:02:43.854 SPDK_VAGRANT_PROVIDER=libvirt 00:02:43.854 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:02:43.854 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:43.854 SPDK_OPENSTACK_NETWORK=0 00:02:43.854 VAGRANT_PACKAGE_BOX=0 00:02:43.854 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:02:43.854 FORCE_DISTRO=true 00:02:43.854 VAGRANT_BOX_VERSION= 00:02:43.854 EXTRA_VAGRANTFILES= 00:02:43.854 NIC_MODEL=e1000 00:02:43.854 00:02:43.854 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt' 00:02:43.854 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:47.134 Bringing machine 'default' up with 'libvirt' provider... 00:02:47.393 ==> default: Creating image (snapshot of base box volume). 00:02:47.651 ==> default: Creating domain with the following settings... 00:02:47.651 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721897094_cb6b809939210c426868 00:02:47.651 ==> default: -- Domain type: kvm 00:02:47.651 ==> default: -- Cpus: 10 00:02:47.651 ==> default: -- Feature: acpi 00:02:47.651 ==> default: -- Feature: apic 00:02:47.651 ==> default: -- Feature: pae 00:02:47.651 ==> default: -- Memory: 12288M 00:02:47.651 ==> default: -- Memory Backing: hugepages: 00:02:47.651 ==> default: -- Management MAC: 00:02:47.651 ==> default: -- Loader: 00:02:47.651 ==> default: -- Nvram: 00:02:47.651 ==> default: -- Base box: spdk/fedora38 00:02:47.651 ==> default: -- Storage pool: default 00:02:47.651 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721897094_cb6b809939210c426868.img (20G) 00:02:47.651 ==> default: -- Volume Cache: default 00:02:47.651 ==> default: -- Kernel: 00:02:47.651 ==> default: -- Initrd: 00:02:47.651 ==> default: -- Graphics Type: vnc 00:02:47.651 ==> default: -- Graphics Port: -1 00:02:47.651 ==> default: -- Graphics IP: 127.0.0.1 00:02:47.651 ==> default: -- Graphics Password: Not defined 00:02:47.651 ==> default: -- Video Type: cirrus 00:02:47.651 ==> default: -- Video VRAM: 9216 00:02:47.651 ==> default: -- Sound Type: 00:02:47.651 ==> default: -- Keymap: en-us 00:02:47.651 ==> default: -- TPM Path: 00:02:47.651 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:47.651 ==> default: -- Command line args: 00:02:47.651 ==> default: -> value=-device, 00:02:47.651 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:47.651 ==> default: -> value=-drive, 00:02:47.651 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 00:02:47.651 ==> default: -> value=-device, 00:02:47.651 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:47.651 ==> default: -> value=-device, 00:02:47.651 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:47.651 ==> default: -> value=-drive, 00:02:47.651 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:02:47.651 ==> default: -> value=-device, 00:02:47.651 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:47.651 ==> default: -> value=-drive, 
00:02:47.651 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:02:47.651 ==> default: -> value=-device, 00:02:47.651 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:47.651 ==> default: -> value=-drive, 00:02:47.651 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:02:47.651 ==> default: -> value=-device, 00:02:47.651 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:47.651 ==> default: Creating shared folders metadata... 00:02:47.651 ==> default: Starting domain. 00:02:49.553 ==> default: Waiting for domain to get an IP address... 00:03:04.425 ==> default: Waiting for SSH to become available... 00:03:06.325 ==> default: Configuring and enabling network interfaces... 00:03:10.508 default: SSH address: 192.168.121.210:22 00:03:10.508 default: SSH username: vagrant 00:03:10.508 default: SSH auth method: private key 00:03:12.409 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:03:20.571 ==> default: Mounting SSHFS shared folder... 00:03:21.503 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:03:21.503 ==> default: Checking Mount.. 00:03:22.882 ==> default: Folder Successfully Mounted! 00:03:22.882 ==> default: Running provisioner: file... 00:03:23.448 default: ~/.gitconfig => .gitconfig 00:03:24.014 00:03:24.014 SUCCESS! 00:03:24.014 00:03:24.014 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:03:24.014 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:03:24.014 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:03:24.014 00:03:24.023 [Pipeline] } 00:03:24.040 [Pipeline] // stage 00:03:24.049 [Pipeline] dir 00:03:24.049 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt 00:03:24.051 [Pipeline] { 00:03:24.064 [Pipeline] catchError 00:03:24.065 [Pipeline] { 00:03:24.077 [Pipeline] sh 00:03:24.355 + vagrant ssh-config --host vagrant 00:03:24.355 + sed -ne /^Host/,$p 00:03:24.355 + tee ssh_conf 00:03:28.541 Host vagrant 00:03:28.541 HostName 192.168.121.210 00:03:28.541 User vagrant 00:03:28.541 Port 22 00:03:28.541 UserKnownHostsFile /dev/null 00:03:28.541 StrictHostKeyChecking no 00:03:28.541 PasswordAuthentication no 00:03:28.541 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:03:28.541 IdentitiesOnly yes 00:03:28.541 LogLevel FATAL 00:03:28.541 ForwardAgent yes 00:03:28.541 ForwardX11 yes 00:03:28.541 00:03:28.554 [Pipeline] withEnv 00:03:28.555 [Pipeline] { 00:03:28.570 [Pipeline] sh 00:03:28.877 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:03:28.877 source /etc/os-release 00:03:28.877 [[ -e /image.version ]] && img=$(< /image.version) 00:03:28.877 # Minimal, systemd-like check. 
00:03:28.877 if [[ -e /.dockerenv ]]; then 00:03:28.877 # Clear garbage from the node's name: 00:03:28.877 # agt-er_autotest_547-896 -> autotest_547-896 00:03:28.877 # $HOSTNAME is the actual container id 00:03:28.877 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:03:28.877 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:03:28.877 # We can assume this is a mount from a host where container is running, 00:03:28.877 # so fetch its hostname to easily identify the target swarm worker. 00:03:28.877 container="$(< /etc/hostname) ($agent)" 00:03:28.877 else 00:03:28.877 # Fallback 00:03:28.877 container=$agent 00:03:28.877 fi 00:03:28.877 fi 00:03:28.877 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:03:28.877 00:03:28.887 [Pipeline] } 00:03:28.909 [Pipeline] // withEnv 00:03:28.919 [Pipeline] setCustomBuildProperty 00:03:28.935 [Pipeline] stage 00:03:28.937 [Pipeline] { (Tests) 00:03:28.961 [Pipeline] sh 00:03:29.238 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:03:29.252 [Pipeline] sh 00:03:29.531 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:03:29.546 [Pipeline] timeout 00:03:29.547 Timeout set to expire in 30 min 00:03:29.549 [Pipeline] { 00:03:29.566 [Pipeline] sh 00:03:29.844 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:03:30.411 HEAD is now at 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata. 00:03:30.424 [Pipeline] sh 00:03:30.704 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:03:30.975 [Pipeline] sh 00:03:31.252 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:03:31.269 [Pipeline] sh 00:03:31.550 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:03:31.550 ++ readlink -f spdk_repo 00:03:31.550 + DIR_ROOT=/home/vagrant/spdk_repo 00:03:31.550 + [[ -n /home/vagrant/spdk_repo ]] 00:03:31.550 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:03:31.550 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:03:31.550 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:03:31.550 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:03:31.808 + [[ -d /home/vagrant/spdk_repo/output ]] 00:03:31.808 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:03:31.808 + cd /home/vagrant/spdk_repo 00:03:31.808 + source /etc/os-release 00:03:31.808 ++ NAME='Fedora Linux' 00:03:31.808 ++ VERSION='38 (Cloud Edition)' 00:03:31.808 ++ ID=fedora 00:03:31.808 ++ VERSION_ID=38 00:03:31.808 ++ VERSION_CODENAME= 00:03:31.808 ++ PLATFORM_ID=platform:f38 00:03:31.808 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:03:31.808 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:31.808 ++ LOGO=fedora-logo-icon 00:03:31.808 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:03:31.808 ++ HOME_URL=https://fedoraproject.org/ 00:03:31.808 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:03:31.808 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:31.808 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:31.808 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:31.808 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:03:31.808 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:31.808 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:03:31.808 ++ SUPPORT_END=2024-05-14 00:03:31.808 ++ VARIANT='Cloud Edition' 00:03:31.808 ++ VARIANT_ID=cloud 00:03:31.808 + uname -a 00:03:31.808 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:03:31.808 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:32.066 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:32.066 Hugepages 00:03:32.066 node hugesize free / total 00:03:32.066 node0 1048576kB 0 / 0 00:03:32.066 node0 2048kB 0 / 0 00:03:32.066 00:03:32.066 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:32.066 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:32.324 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:32.324 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:32.324 + rm -f /tmp/spdk-ld-path 00:03:32.324 + source autorun-spdk.conf 00:03:32.324 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:32.324 ++ SPDK_TEST_NVMF=1 00:03:32.324 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:32.324 ++ SPDK_TEST_URING=1 00:03:32.324 ++ SPDK_TEST_VFIOUSER=1 00:03:32.324 ++ SPDK_TEST_USDT=1 00:03:32.324 ++ SPDK_RUN_ASAN=1 00:03:32.324 ++ SPDK_RUN_UBSAN=1 00:03:32.325 ++ NET_TYPE=virt 00:03:32.325 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:32.325 ++ RUN_NIGHTLY=1 00:03:32.325 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:32.325 + [[ -n '' ]] 00:03:32.325 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:03:32.325 + for M in /var/spdk/build-*-manifest.txt 00:03:32.325 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:32.325 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:32.325 + for M in /var/spdk/build-*-manifest.txt 00:03:32.325 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:32.325 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:32.325 ++ uname 00:03:32.325 + [[ Linux == \L\i\n\u\x ]] 00:03:32.325 + sudo dmesg -T 00:03:32.325 + sudo dmesg --clear 00:03:32.325 + dmesg_pid=5111 00:03:32.325 + [[ Fedora Linux == FreeBSD ]] 00:03:32.325 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:32.325 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:32.325 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:32.325 + [[ -x /usr/src/fio-static/fio ]] 00:03:32.325 + 
sudo dmesg -Tw 00:03:32.325 + export FIO_BIN=/usr/src/fio-static/fio 00:03:32.325 + FIO_BIN=/usr/src/fio-static/fio 00:03:32.325 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:32.325 + [[ ! -v VFIO_QEMU_BIN ]] 00:03:32.325 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:32.325 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:32.325 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:32.325 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:32.325 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:32.325 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:32.325 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:32.325 Test configuration: 00:03:32.325 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:32.325 SPDK_TEST_NVMF=1 00:03:32.325 SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:32.325 SPDK_TEST_URING=1 00:03:32.325 SPDK_TEST_VFIOUSER=1 00:03:32.325 SPDK_TEST_USDT=1 00:03:32.325 SPDK_RUN_ASAN=1 00:03:32.325 SPDK_RUN_UBSAN=1 00:03:32.325 NET_TYPE=virt 00:03:32.325 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:32.325 RUN_NIGHTLY=1 08:45:39 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:32.325 08:45:39 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:32.325 08:45:39 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:32.325 08:45:39 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:32.325 08:45:39 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:32.325 08:45:39 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:32.325 08:45:39 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:32.325 08:45:39 -- paths/export.sh@5 -- $ export PATH 00:03:32.325 08:45:39 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:32.325 08:45:39 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:03:32.325 08:45:39 -- common/autobuild_common.sh@447 -- $ date +%s 00:03:32.325 08:45:39 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721897139.XXXXXX 00:03:32.325 08:45:39 -- 
common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721897139.RyjXbf 00:03:32.325 08:45:39 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:03:32.325 08:45:39 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:03:32.325 08:45:39 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:03:32.325 08:45:39 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:03:32.325 08:45:39 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:03:32.325 08:45:39 -- common/autobuild_common.sh@463 -- $ get_config_params 00:03:32.325 08:45:39 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:03:32.325 08:45:39 -- common/autotest_common.sh@10 -- $ set +x 00:03:32.584 08:45:39 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-uring' 00:03:32.584 08:45:39 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:03:32.584 08:45:39 -- pm/common@17 -- $ local monitor 00:03:32.584 08:45:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:32.584 08:45:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:32.584 08:45:39 -- pm/common@25 -- $ sleep 1 00:03:32.584 08:45:39 -- pm/common@21 -- $ date +%s 00:03:32.584 08:45:39 -- pm/common@21 -- $ date +%s 00:03:32.584 08:45:39 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721897139 00:03:32.584 08:45:39 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721897139 00:03:32.584 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721897139_collect-vmstat.pm.log 00:03:32.584 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721897139_collect-cpu-load.pm.log 00:03:33.518 08:45:40 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:03:33.518 08:45:40 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:33.518 08:45:40 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:33.518 08:45:40 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:33.518 08:45:40 -- spdk/autobuild.sh@16 -- $ date -u 00:03:33.518 Thu Jul 25 08:45:40 AM UTC 2024 00:03:33.518 08:45:40 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:33.518 v24.09-pre-321-g704257090 00:03:33.518 08:45:40 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:03:33.518 08:45:40 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:03:33.518 08:45:40 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:33.518 08:45:40 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:33.518 08:45:40 -- common/autotest_common.sh@10 -- $ set +x 00:03:33.518 ************************************ 00:03:33.518 START TEST asan 00:03:33.518 ************************************ 00:03:33.518 using asan 00:03:33.518 08:45:40 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:03:33.518 00:03:33.518 real 0m0.000s 
00:03:33.518 user 0m0.000s 00:03:33.518 sys 0m0.000s 00:03:33.518 08:45:40 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:33.518 08:45:40 asan -- common/autotest_common.sh@10 -- $ set +x 00:03:33.518 ************************************ 00:03:33.518 END TEST asan 00:03:33.518 ************************************ 00:03:33.518 08:45:40 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:33.518 08:45:40 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:33.518 08:45:40 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:33.518 08:45:40 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:33.518 08:45:40 -- common/autotest_common.sh@10 -- $ set +x 00:03:33.518 ************************************ 00:03:33.518 START TEST ubsan 00:03:33.518 ************************************ 00:03:33.518 using ubsan 00:03:33.518 08:45:40 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:03:33.518 00:03:33.518 real 0m0.000s 00:03:33.518 user 0m0.000s 00:03:33.518 sys 0m0.000s 00:03:33.518 08:45:40 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:33.518 08:45:40 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:33.518 ************************************ 00:03:33.518 END TEST ubsan 00:03:33.518 ************************************ 00:03:33.518 08:45:40 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:03:33.518 08:45:40 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:33.518 08:45:40 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:33.518 08:45:40 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:33.518 08:45:40 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:33.518 08:45:40 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:33.518 08:45:40 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:33.518 08:45:40 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:33.518 08:45:40 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-uring --with-shared 00:03:33.776 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:33.776 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:34.342 Using 'verbs' RDMA provider 00:03:47.500 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:04:02.420 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:02.420 Creating mk/config.mk...done. 00:04:02.420 Creating mk/cc.flags.mk...done. 00:04:02.420 Type 'make' to build. 00:04:02.420 08:46:07 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:04:02.420 08:46:07 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:04:02.420 08:46:07 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:04:02.420 08:46:07 -- common/autotest_common.sh@10 -- $ set +x 00:04:02.420 ************************************ 00:04:02.420 START TEST make 00:04:02.420 ************************************ 00:04:02.420 08:46:07 make -- common/autotest_common.sh@1125 -- $ make -j10 00:04:02.420 make[1]: Nothing to be done for 'all'. 
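
The configure step above is where the flags accumulated in autorun-spdk.conf (ASan/UBSan, vfio-user, uring, USDT, and so on) become SPDK build options. A rough standalone equivalent of what autobuild runs at this point, with the option string copied verbatim from the log and the same -j10 used by the make step; the spdk_repo path is the layout this job uses inside the VM:

    # Configure and build SPDK with the options reported by autobuild above.
    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-asan --enable-coverage --with-ublk \
        --with-vfio-user --with-uring --with-shared
    make -j10
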
00:04:02.420 The Meson build system 00:04:02.420 Version: 1.3.1 00:04:02.420 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user 00:04:02.420 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:04:02.420 Build type: native build 00:04:02.420 Project name: libvfio-user 00:04:02.420 Project version: 0.0.1 00:04:02.420 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:04:02.420 C linker for the host machine: cc ld.bfd 2.39-16 00:04:02.420 Host machine cpu family: x86_64 00:04:02.420 Host machine cpu: x86_64 00:04:02.420 Run-time dependency threads found: YES 00:04:02.420 Library dl found: YES 00:04:02.420 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:04:02.420 Run-time dependency json-c found: YES 0.17 00:04:02.420 Run-time dependency cmocka found: YES 1.1.7 00:04:02.420 Program pytest-3 found: NO 00:04:02.420 Program flake8 found: NO 00:04:02.420 Program misspell-fixer found: NO 00:04:02.420 Program restructuredtext-lint found: NO 00:04:02.420 Program valgrind found: YES (/usr/bin/valgrind) 00:04:02.420 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:02.420 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:02.420 Compiler for C supports arguments -Wwrite-strings: YES 00:04:02.420 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:04:02.420 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh) 00:04:02.420 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh) 00:04:02.420 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
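
The Meson output above belongs to the libvfio-user submodule, which SPDK configures into a separate debug build tree, builds with ninja, and then stages with a DESTDIR install (the install command appears further down in this log). A condensed sketch of the equivalent commands, with the directories and options taken from the configure summary in this log; SPDK drives this through its own build scripts, so the exact invocation may differ:

    # Configure, build, and stage libvfio-user under the SPDK build tree.
    SRC=/home/vagrant/spdk_repo/spdk/libvfio-user
    BUILD=/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug
    meson setup "$BUILD" "$SRC" --buildtype=debug --default-library=shared --libdir=/usr/local/lib
    ninja -C "$BUILD"
    DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C "$BUILD"
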
00:04:02.420 Build targets in project: 8 00:04:02.420 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:04:02.420 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:04:02.420 00:04:02.420 libvfio-user 0.0.1 00:04:02.420 00:04:02.420 User defined options 00:04:02.420 buildtype : debug 00:04:02.420 default_library: shared 00:04:02.420 libdir : /usr/local/lib 00:04:02.420 00:04:02.420 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:02.986 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:04:02.986 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:04:02.986 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:04:02.986 [3/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:04:02.986 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:04:02.986 [5/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:04:02.986 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:04:02.986 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:04:02.986 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:04:03.245 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:04:03.245 [10/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:04:03.245 [11/37] Compiling C object samples/null.p/null.c.o 00:04:03.245 [12/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:04:03.245 [13/37] Compiling C object samples/client.p/client.c.o 00:04:03.245 [14/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:04:03.245 [15/37] Compiling C object samples/lspci.p/lspci.c.o 00:04:03.245 [16/37] Linking target samples/client 00:04:03.245 [17/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:04:03.245 [18/37] Compiling C object test/unit_tests.p/mocks.c.o 00:04:03.245 [19/37] Compiling C object samples/server.p/server.c.o 00:04:03.245 [20/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:04:03.245 [21/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:04:03.245 [22/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:04:03.245 [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:04:03.245 [24/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:04:03.503 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:04:03.503 [26/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:04:03.503 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:04:03.503 [28/37] Linking target lib/libvfio-user.so.0.0.1 00:04:03.503 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:04:03.503 [30/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:04:03.503 [31/37] Linking target test/unit_tests 00:04:03.503 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:04:03.503 [33/37] Linking target samples/server 00:04:03.503 [34/37] Linking target samples/gpio-pci-idio-16 00:04:03.761 [35/37] Linking target samples/null 00:04:03.761 [36/37] Linking target samples/shadow_ioeventfd_server 00:04:03.761 [37/37] Linking target samples/lspci 00:04:03.761 INFO: autodetecting backend as ninja 00:04:03.761 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:04:03.761 
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:04:04.020 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:04:04.020 ninja: no work to do. 00:04:12.150 The Meson build system 00:04:12.150 Version: 1.3.1 00:04:12.150 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:04:12.150 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:04:12.150 Build type: native build 00:04:12.150 Program cat found: YES (/usr/bin/cat) 00:04:12.150 Project name: DPDK 00:04:12.150 Project version: 24.03.0 00:04:12.150 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:04:12.150 C linker for the host machine: cc ld.bfd 2.39-16 00:04:12.150 Host machine cpu family: x86_64 00:04:12.150 Host machine cpu: x86_64 00:04:12.150 Message: ## Building in Developer Mode ## 00:04:12.150 Program pkg-config found: YES (/usr/bin/pkg-config) 00:04:12.150 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:04:12.150 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:04:12.150 Program python3 found: YES (/usr/bin/python3) 00:04:12.150 Program cat found: YES (/usr/bin/cat) 00:04:12.150 Compiler for C supports arguments -march=native: YES 00:04:12.150 Checking for size of "void *" : 8 00:04:12.150 Checking for size of "void *" : 8 (cached) 00:04:12.150 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:04:12.150 Library m found: YES 00:04:12.150 Library numa found: YES 00:04:12.150 Has header "numaif.h" : YES 00:04:12.150 Library fdt found: NO 00:04:12.150 Library execinfo found: NO 00:04:12.150 Has header "execinfo.h" : YES 00:04:12.150 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:04:12.150 Run-time dependency libarchive found: NO (tried pkgconfig) 00:04:12.150 Run-time dependency libbsd found: NO (tried pkgconfig) 00:04:12.150 Run-time dependency jansson found: NO (tried pkgconfig) 00:04:12.150 Run-time dependency openssl found: YES 3.0.9 00:04:12.150 Run-time dependency libpcap found: YES 1.10.4 00:04:12.150 Has header "pcap.h" with dependency libpcap: YES 00:04:12.150 Compiler for C supports arguments -Wcast-qual: YES 00:04:12.150 Compiler for C supports arguments -Wdeprecated: YES 00:04:12.150 Compiler for C supports arguments -Wformat: YES 00:04:12.150 Compiler for C supports arguments -Wformat-nonliteral: NO 00:04:12.150 Compiler for C supports arguments -Wformat-security: NO 00:04:12.150 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:12.150 Compiler for C supports arguments -Wmissing-prototypes: YES 00:04:12.150 Compiler for C supports arguments -Wnested-externs: YES 00:04:12.150 Compiler for C supports arguments -Wold-style-definition: YES 00:04:12.150 Compiler for C supports arguments -Wpointer-arith: YES 00:04:12.150 Compiler for C supports arguments -Wsign-compare: YES 00:04:12.150 Compiler for C supports arguments -Wstrict-prototypes: YES 00:04:12.150 Compiler for C supports arguments -Wundef: YES 00:04:12.150 Compiler for C supports arguments -Wwrite-strings: YES 00:04:12.150 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:04:12.150 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:04:12.150 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:12.150 Compiler for C supports arguments -Wno-zero-length-bounds: 
YES 00:04:12.150 Program objdump found: YES (/usr/bin/objdump) 00:04:12.150 Compiler for C supports arguments -mavx512f: YES 00:04:12.150 Checking if "AVX512 checking" compiles: YES 00:04:12.150 Fetching value of define "__SSE4_2__" : 1 00:04:12.150 Fetching value of define "__AES__" : 1 00:04:12.150 Fetching value of define "__AVX__" : 1 00:04:12.150 Fetching value of define "__AVX2__" : 1 00:04:12.150 Fetching value of define "__AVX512BW__" : (undefined) 00:04:12.150 Fetching value of define "__AVX512CD__" : (undefined) 00:04:12.150 Fetching value of define "__AVX512DQ__" : (undefined) 00:04:12.150 Fetching value of define "__AVX512F__" : (undefined) 00:04:12.150 Fetching value of define "__AVX512VL__" : (undefined) 00:04:12.150 Fetching value of define "__PCLMUL__" : 1 00:04:12.150 Fetching value of define "__RDRND__" : 1 00:04:12.150 Fetching value of define "__RDSEED__" : 1 00:04:12.150 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:04:12.150 Fetching value of define "__znver1__" : (undefined) 00:04:12.150 Fetching value of define "__znver2__" : (undefined) 00:04:12.150 Fetching value of define "__znver3__" : (undefined) 00:04:12.150 Fetching value of define "__znver4__" : (undefined) 00:04:12.150 Library asan found: YES 00:04:12.150 Compiler for C supports arguments -Wno-format-truncation: YES 00:04:12.150 Message: lib/log: Defining dependency "log" 00:04:12.150 Message: lib/kvargs: Defining dependency "kvargs" 00:04:12.150 Message: lib/telemetry: Defining dependency "telemetry" 00:04:12.150 Library rt found: YES 00:04:12.150 Checking for function "getentropy" : NO 00:04:12.150 Message: lib/eal: Defining dependency "eal" 00:04:12.150 Message: lib/ring: Defining dependency "ring" 00:04:12.150 Message: lib/rcu: Defining dependency "rcu" 00:04:12.150 Message: lib/mempool: Defining dependency "mempool" 00:04:12.150 Message: lib/mbuf: Defining dependency "mbuf" 00:04:12.150 Fetching value of define "__PCLMUL__" : 1 (cached) 00:04:12.150 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:04:12.150 Compiler for C supports arguments -mpclmul: YES 00:04:12.150 Compiler for C supports arguments -maes: YES 00:04:12.150 Compiler for C supports arguments -mavx512f: YES (cached) 00:04:12.150 Compiler for C supports arguments -mavx512bw: YES 00:04:12.150 Compiler for C supports arguments -mavx512dq: YES 00:04:12.150 Compiler for C supports arguments -mavx512vl: YES 00:04:12.150 Compiler for C supports arguments -mvpclmulqdq: YES 00:04:12.150 Compiler for C supports arguments -mavx2: YES 00:04:12.150 Compiler for C supports arguments -mavx: YES 00:04:12.150 Message: lib/net: Defining dependency "net" 00:04:12.150 Message: lib/meter: Defining dependency "meter" 00:04:12.150 Message: lib/ethdev: Defining dependency "ethdev" 00:04:12.150 Message: lib/pci: Defining dependency "pci" 00:04:12.150 Message: lib/cmdline: Defining dependency "cmdline" 00:04:12.150 Message: lib/hash: Defining dependency "hash" 00:04:12.150 Message: lib/timer: Defining dependency "timer" 00:04:12.150 Message: lib/compressdev: Defining dependency "compressdev" 00:04:12.150 Message: lib/cryptodev: Defining dependency "cryptodev" 00:04:12.150 Message: lib/dmadev: Defining dependency "dmadev" 00:04:12.150 Compiler for C supports arguments -Wno-cast-qual: YES 00:04:12.150 Message: lib/power: Defining dependency "power" 00:04:12.150 Message: lib/reorder: Defining dependency "reorder" 00:04:12.150 Message: lib/security: Defining dependency "security" 00:04:12.150 Has header "linux/userfaultfd.h" : YES 
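
The long run of "Fetching value of define" lines in the DPDK configure output is essentially Meson reading the compiler's predefined macros for the native CPU target to decide which SIMD code paths to enable. This is not the command Meson itself runs, but the same macros can be inspected directly on the build host with:

    # Dump the predefined macros the DPDK configure step is probing (AVX/AVX512/AES/PCLMUL family).
    gcc -march=native -dM -E - </dev/null | grep -E '__(AES|PCLMUL|RDRND|RDSEED)__|__AVX'
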
00:04:12.150 Has header "linux/vduse.h" : YES 00:04:12.150 Message: lib/vhost: Defining dependency "vhost" 00:04:12.150 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:04:12.150 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:04:12.150 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:04:12.150 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:04:12.150 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:04:12.150 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:04:12.150 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:04:12.150 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:04:12.150 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:04:12.150 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:04:12.150 Program doxygen found: YES (/usr/bin/doxygen) 00:04:12.150 Configuring doxy-api-html.conf using configuration 00:04:12.150 Configuring doxy-api-man.conf using configuration 00:04:12.150 Program mandb found: YES (/usr/bin/mandb) 00:04:12.150 Program sphinx-build found: NO 00:04:12.150 Configuring rte_build_config.h using configuration 00:04:12.150 Message: 00:04:12.150 ================= 00:04:12.150 Applications Enabled 00:04:12.150 ================= 00:04:12.150 00:04:12.150 apps: 00:04:12.150 00:04:12.150 00:04:12.150 Message: 00:04:12.150 ================= 00:04:12.150 Libraries Enabled 00:04:12.150 ================= 00:04:12.150 00:04:12.150 libs: 00:04:12.150 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:04:12.150 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:04:12.150 cryptodev, dmadev, power, reorder, security, vhost, 00:04:12.150 00:04:12.150 Message: 00:04:12.150 =============== 00:04:12.150 Drivers Enabled 00:04:12.150 =============== 00:04:12.150 00:04:12.150 common: 00:04:12.150 00:04:12.150 bus: 00:04:12.150 pci, vdev, 00:04:12.150 mempool: 00:04:12.150 ring, 00:04:12.150 dma: 00:04:12.150 00:04:12.150 net: 00:04:12.150 00:04:12.150 crypto: 00:04:12.150 00:04:12.150 compress: 00:04:12.150 00:04:12.150 vdpa: 00:04:12.150 00:04:12.150 00:04:12.150 Message: 00:04:12.150 ================= 00:04:12.150 Content Skipped 00:04:12.150 ================= 00:04:12.150 00:04:12.150 apps: 00:04:12.150 dumpcap: explicitly disabled via build config 00:04:12.150 graph: explicitly disabled via build config 00:04:12.151 pdump: explicitly disabled via build config 00:04:12.151 proc-info: explicitly disabled via build config 00:04:12.151 test-acl: explicitly disabled via build config 00:04:12.151 test-bbdev: explicitly disabled via build config 00:04:12.151 test-cmdline: explicitly disabled via build config 00:04:12.151 test-compress-perf: explicitly disabled via build config 00:04:12.151 test-crypto-perf: explicitly disabled via build config 00:04:12.151 test-dma-perf: explicitly disabled via build config 00:04:12.151 test-eventdev: explicitly disabled via build config 00:04:12.151 test-fib: explicitly disabled via build config 00:04:12.151 test-flow-perf: explicitly disabled via build config 00:04:12.151 test-gpudev: explicitly disabled via build config 00:04:12.151 test-mldev: explicitly disabled via build config 00:04:12.151 test-pipeline: explicitly disabled via build config 00:04:12.151 test-pmd: explicitly disabled via build config 00:04:12.151 test-regex: explicitly disabled via build config 00:04:12.151 test-sad: explicitly disabled via build 
config 00:04:12.151 test-security-perf: explicitly disabled via build config 00:04:12.151 00:04:12.151 libs: 00:04:12.151 argparse: explicitly disabled via build config 00:04:12.151 metrics: explicitly disabled via build config 00:04:12.151 acl: explicitly disabled via build config 00:04:12.151 bbdev: explicitly disabled via build config 00:04:12.151 bitratestats: explicitly disabled via build config 00:04:12.151 bpf: explicitly disabled via build config 00:04:12.151 cfgfile: explicitly disabled via build config 00:04:12.151 distributor: explicitly disabled via build config 00:04:12.151 efd: explicitly disabled via build config 00:04:12.151 eventdev: explicitly disabled via build config 00:04:12.151 dispatcher: explicitly disabled via build config 00:04:12.151 gpudev: explicitly disabled via build config 00:04:12.151 gro: explicitly disabled via build config 00:04:12.151 gso: explicitly disabled via build config 00:04:12.151 ip_frag: explicitly disabled via build config 00:04:12.151 jobstats: explicitly disabled via build config 00:04:12.151 latencystats: explicitly disabled via build config 00:04:12.151 lpm: explicitly disabled via build config 00:04:12.151 member: explicitly disabled via build config 00:04:12.151 pcapng: explicitly disabled via build config 00:04:12.151 rawdev: explicitly disabled via build config 00:04:12.151 regexdev: explicitly disabled via build config 00:04:12.151 mldev: explicitly disabled via build config 00:04:12.151 rib: explicitly disabled via build config 00:04:12.151 sched: explicitly disabled via build config 00:04:12.151 stack: explicitly disabled via build config 00:04:12.151 ipsec: explicitly disabled via build config 00:04:12.151 pdcp: explicitly disabled via build config 00:04:12.151 fib: explicitly disabled via build config 00:04:12.151 port: explicitly disabled via build config 00:04:12.151 pdump: explicitly disabled via build config 00:04:12.151 table: explicitly disabled via build config 00:04:12.151 pipeline: explicitly disabled via build config 00:04:12.151 graph: explicitly disabled via build config 00:04:12.151 node: explicitly disabled via build config 00:04:12.151 00:04:12.151 drivers: 00:04:12.151 common/cpt: not in enabled drivers build config 00:04:12.151 common/dpaax: not in enabled drivers build config 00:04:12.151 common/iavf: not in enabled drivers build config 00:04:12.151 common/idpf: not in enabled drivers build config 00:04:12.151 common/ionic: not in enabled drivers build config 00:04:12.151 common/mvep: not in enabled drivers build config 00:04:12.151 common/octeontx: not in enabled drivers build config 00:04:12.151 bus/auxiliary: not in enabled drivers build config 00:04:12.151 bus/cdx: not in enabled drivers build config 00:04:12.151 bus/dpaa: not in enabled drivers build config 00:04:12.151 bus/fslmc: not in enabled drivers build config 00:04:12.151 bus/ifpga: not in enabled drivers build config 00:04:12.151 bus/platform: not in enabled drivers build config 00:04:12.151 bus/uacce: not in enabled drivers build config 00:04:12.151 bus/vmbus: not in enabled drivers build config 00:04:12.151 common/cnxk: not in enabled drivers build config 00:04:12.151 common/mlx5: not in enabled drivers build config 00:04:12.151 common/nfp: not in enabled drivers build config 00:04:12.151 common/nitrox: not in enabled drivers build config 00:04:12.151 common/qat: not in enabled drivers build config 00:04:12.151 common/sfc_efx: not in enabled drivers build config 00:04:12.151 mempool/bucket: not in enabled drivers build config 00:04:12.151 
mempool/cnxk: not in enabled drivers build config 00:04:12.151 mempool/dpaa: not in enabled drivers build config 00:04:12.151 mempool/dpaa2: not in enabled drivers build config 00:04:12.151 mempool/octeontx: not in enabled drivers build config 00:04:12.151 mempool/stack: not in enabled drivers build config 00:04:12.151 dma/cnxk: not in enabled drivers build config 00:04:12.151 dma/dpaa: not in enabled drivers build config 00:04:12.151 dma/dpaa2: not in enabled drivers build config 00:04:12.151 dma/hisilicon: not in enabled drivers build config 00:04:12.151 dma/idxd: not in enabled drivers build config 00:04:12.151 dma/ioat: not in enabled drivers build config 00:04:12.151 dma/skeleton: not in enabled drivers build config 00:04:12.151 net/af_packet: not in enabled drivers build config 00:04:12.151 net/af_xdp: not in enabled drivers build config 00:04:12.151 net/ark: not in enabled drivers build config 00:04:12.151 net/atlantic: not in enabled drivers build config 00:04:12.151 net/avp: not in enabled drivers build config 00:04:12.151 net/axgbe: not in enabled drivers build config 00:04:12.151 net/bnx2x: not in enabled drivers build config 00:04:12.151 net/bnxt: not in enabled drivers build config 00:04:12.151 net/bonding: not in enabled drivers build config 00:04:12.151 net/cnxk: not in enabled drivers build config 00:04:12.151 net/cpfl: not in enabled drivers build config 00:04:12.151 net/cxgbe: not in enabled drivers build config 00:04:12.151 net/dpaa: not in enabled drivers build config 00:04:12.151 net/dpaa2: not in enabled drivers build config 00:04:12.151 net/e1000: not in enabled drivers build config 00:04:12.151 net/ena: not in enabled drivers build config 00:04:12.151 net/enetc: not in enabled drivers build config 00:04:12.151 net/enetfec: not in enabled drivers build config 00:04:12.151 net/enic: not in enabled drivers build config 00:04:12.151 net/failsafe: not in enabled drivers build config 00:04:12.151 net/fm10k: not in enabled drivers build config 00:04:12.151 net/gve: not in enabled drivers build config 00:04:12.151 net/hinic: not in enabled drivers build config 00:04:12.151 net/hns3: not in enabled drivers build config 00:04:12.151 net/i40e: not in enabled drivers build config 00:04:12.151 net/iavf: not in enabled drivers build config 00:04:12.151 net/ice: not in enabled drivers build config 00:04:12.151 net/idpf: not in enabled drivers build config 00:04:12.151 net/igc: not in enabled drivers build config 00:04:12.151 net/ionic: not in enabled drivers build config 00:04:12.151 net/ipn3ke: not in enabled drivers build config 00:04:12.151 net/ixgbe: not in enabled drivers build config 00:04:12.151 net/mana: not in enabled drivers build config 00:04:12.151 net/memif: not in enabled drivers build config 00:04:12.151 net/mlx4: not in enabled drivers build config 00:04:12.151 net/mlx5: not in enabled drivers build config 00:04:12.151 net/mvneta: not in enabled drivers build config 00:04:12.151 net/mvpp2: not in enabled drivers build config 00:04:12.151 net/netvsc: not in enabled drivers build config 00:04:12.151 net/nfb: not in enabled drivers build config 00:04:12.151 net/nfp: not in enabled drivers build config 00:04:12.151 net/ngbe: not in enabled drivers build config 00:04:12.151 net/null: not in enabled drivers build config 00:04:12.151 net/octeontx: not in enabled drivers build config 00:04:12.151 net/octeon_ep: not in enabled drivers build config 00:04:12.151 net/pcap: not in enabled drivers build config 00:04:12.151 net/pfe: not in enabled drivers build config 
00:04:12.151 net/qede: not in enabled drivers build config 00:04:12.151 net/ring: not in enabled drivers build config 00:04:12.151 net/sfc: not in enabled drivers build config 00:04:12.151 net/softnic: not in enabled drivers build config 00:04:12.151 net/tap: not in enabled drivers build config 00:04:12.151 net/thunderx: not in enabled drivers build config 00:04:12.151 net/txgbe: not in enabled drivers build config 00:04:12.151 net/vdev_netvsc: not in enabled drivers build config 00:04:12.151 net/vhost: not in enabled drivers build config 00:04:12.151 net/virtio: not in enabled drivers build config 00:04:12.151 net/vmxnet3: not in enabled drivers build config 00:04:12.151 raw/*: missing internal dependency, "rawdev" 00:04:12.151 crypto/armv8: not in enabled drivers build config 00:04:12.151 crypto/bcmfs: not in enabled drivers build config 00:04:12.151 crypto/caam_jr: not in enabled drivers build config 00:04:12.151 crypto/ccp: not in enabled drivers build config 00:04:12.151 crypto/cnxk: not in enabled drivers build config 00:04:12.151 crypto/dpaa_sec: not in enabled drivers build config 00:04:12.151 crypto/dpaa2_sec: not in enabled drivers build config 00:04:12.151 crypto/ipsec_mb: not in enabled drivers build config 00:04:12.151 crypto/mlx5: not in enabled drivers build config 00:04:12.151 crypto/mvsam: not in enabled drivers build config 00:04:12.151 crypto/nitrox: not in enabled drivers build config 00:04:12.151 crypto/null: not in enabled drivers build config 00:04:12.151 crypto/octeontx: not in enabled drivers build config 00:04:12.151 crypto/openssl: not in enabled drivers build config 00:04:12.151 crypto/scheduler: not in enabled drivers build config 00:04:12.151 crypto/uadk: not in enabled drivers build config 00:04:12.151 crypto/virtio: not in enabled drivers build config 00:04:12.151 compress/isal: not in enabled drivers build config 00:04:12.152 compress/mlx5: not in enabled drivers build config 00:04:12.152 compress/nitrox: not in enabled drivers build config 00:04:12.152 compress/octeontx: not in enabled drivers build config 00:04:12.152 compress/zlib: not in enabled drivers build config 00:04:12.152 regex/*: missing internal dependency, "regexdev" 00:04:12.152 ml/*: missing internal dependency, "mldev" 00:04:12.152 vdpa/ifc: not in enabled drivers build config 00:04:12.152 vdpa/mlx5: not in enabled drivers build config 00:04:12.152 vdpa/nfp: not in enabled drivers build config 00:04:12.152 vdpa/sfc: not in enabled drivers build config 00:04:12.152 event/*: missing internal dependency, "eventdev" 00:04:12.152 baseband/*: missing internal dependency, "bbdev" 00:04:12.152 gpu/*: missing internal dependency, "gpudev" 00:04:12.152 00:04:12.152 00:04:12.152 Build targets in project: 85 00:04:12.152 00:04:12.152 DPDK 24.03.0 00:04:12.152 00:04:12.152 User defined options 00:04:12.152 buildtype : debug 00:04:12.152 default_library : shared 00:04:12.152 libdir : lib 00:04:12.152 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:12.152 b_sanitize : address 00:04:12.152 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:04:12.152 c_link_args : 00:04:12.152 cpu_instruction_set: native 00:04:12.152 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:04:12.152 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:04:12.152 enable_docs : false 00:04:12.152 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:04:12.152 enable_kmods : false 00:04:12.152 max_lcores : 128 00:04:12.152 tests : false 00:04:12.152 00:04:12.152 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:12.411 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:04:12.668 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:04:12.668 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:12.668 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:12.668 [4/268] Linking static target lib/librte_log.a 00:04:12.668 [5/268] Linking static target lib/librte_kvargs.a 00:04:12.668 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:13.232 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:13.232 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:13.232 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:13.232 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:13.232 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:13.489 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:13.489 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:13.489 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:13.489 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:13.746 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:13.746 [17/268] Linking static target lib/librte_telemetry.a 00:04:13.746 [18/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:13.746 [19/268] Linking target lib/librte_log.so.24.1 00:04:13.746 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:14.003 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:04:14.272 [22/268] Linking target lib/librte_kvargs.so.24.1 00:04:14.272 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:14.272 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:14.272 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:14.272 [26/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:04:14.556 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:14.556 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:14.556 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:14.556 [30/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:14.556 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:14.813 [32/268] Linking target lib/librte_telemetry.so.24.1 00:04:14.813 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:14.813 [34/268] Generating 
symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:04:14.813 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:15.071 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:15.071 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:15.329 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:15.329 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:15.329 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:15.329 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:15.587 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:15.587 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:15.587 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:15.587 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:15.845 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:15.845 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:15.845 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:15.845 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:16.103 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:16.360 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:16.360 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:16.360 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:16.618 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:16.875 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:16.875 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:16.875 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:16.875 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:16.875 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:17.132 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:17.132 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:17.132 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:17.132 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:17.389 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:17.646 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:17.646 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:17.904 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:17.904 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:18.162 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:18.162 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:18.162 [71/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:18.419 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:18.419 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 
00:04:18.420 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:18.420 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:18.420 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:18.677 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:18.678 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:18.678 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:18.935 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:18.935 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:19.193 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:19.193 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:19.193 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:19.451 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:19.451 [86/268] Linking static target lib/librte_eal.a 00:04:19.451 [87/268] Linking static target lib/librte_ring.a 00:04:19.709 [88/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:19.709 [89/268] Linking static target lib/librte_rcu.a 00:04:19.709 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:19.967 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:19.967 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:19.967 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:19.967 [94/268] Linking static target lib/librte_mempool.a 00:04:19.967 [95/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:20.225 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:20.225 [97/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:20.483 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:20.741 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:20.741 [100/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:20.741 [101/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:20.999 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:20.999 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:21.259 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:21.259 [105/268] Linking static target lib/librte_mbuf.a 00:04:21.259 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:21.259 [107/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:21.259 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:21.259 [109/268] Linking static target lib/librte_net.a 00:04:21.518 [110/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:21.518 [111/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:21.518 [112/268] Linking static target lib/librte_meter.a 00:04:22.085 [113/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:22.085 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:22.085 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 
00:04:22.342 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:22.342 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:22.599 [118/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:22.599 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:22.856 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:23.114 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:23.114 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:23.114 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:23.372 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:23.630 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:23.630 [126/268] Linking static target lib/librte_pci.a 00:04:23.630 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:23.630 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:23.888 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:23.888 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:23.888 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:23.888 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:23.889 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:23.889 [134/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:24.146 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:24.146 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:24.146 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:24.146 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:24.146 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:24.146 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:24.146 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:24.146 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:04:24.146 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:24.404 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:04:24.404 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:24.662 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:24.662 [147/268] Linking static target lib/librte_cmdline.a 00:04:24.920 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:24.920 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:25.178 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:25.178 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:25.178 [152/268] Linking static target lib/librte_timer.a 00:04:25.178 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:25.178 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:25.436 [155/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:25.695 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:25.695 [157/268] Linking static target lib/librte_hash.a 00:04:25.695 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:25.695 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:25.695 [160/268] Linking static target lib/librte_compressdev.a 00:04:25.952 [161/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:25.952 [162/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:25.952 [163/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:25.952 [164/268] Linking static target lib/librte_ethdev.a 00:04:26.211 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:26.211 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:26.211 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:26.469 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:26.469 [169/268] Linking static target lib/librte_dmadev.a 00:04:26.469 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:26.469 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:26.726 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:26.726 [173/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:26.983 [174/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:26.983 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:26.983 [176/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:27.241 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:27.241 [178/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:27.499 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:27.499 [180/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:27.499 [181/268] Linking static target lib/librte_cryptodev.a 00:04:27.499 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:27.499 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:27.759 [184/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:27.759 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:27.759 [186/268] Linking static target lib/librte_power.a 00:04:28.026 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:28.026 [188/268] Linking static target lib/librte_reorder.a 00:04:28.296 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:28.296 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:28.296 [191/268] Linking static target lib/librte_security.a 00:04:28.296 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:28.553 [193/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:28.553 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:29.117 [195/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:29.117 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:29.117 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:29.375 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:29.375 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:29.939 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:29.939 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:29.939 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:30.197 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:30.197 [204/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:30.197 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:30.197 [206/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:30.454 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:30.454 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:30.454 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:30.454 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:30.454 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:30.711 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:30.711 [213/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:30.712 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:30.712 [215/268] Linking static target drivers/librte_bus_vdev.a 00:04:30.970 [216/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:30.970 [217/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:30.970 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:30.970 [219/268] Linking static target drivers/librte_bus_pci.a 00:04:30.970 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:30.970 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:30.970 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:31.227 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:31.227 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:31.227 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:31.227 [226/268] Linking static target drivers/librte_mempool_ring.a 00:04:31.486 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:32.053 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:32.053 [229/268] Linking target lib/librte_eal.so.24.1 00:04:32.311 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:32.311 [231/268] Linking target lib/librte_ring.so.24.1 00:04:32.311 [232/268] Linking target lib/librte_dmadev.so.24.1 00:04:32.311 
[233/268] Linking target lib/librte_meter.so.24.1 00:04:32.311 [234/268] Linking target lib/librte_pci.so.24.1 00:04:32.311 [235/268] Linking target lib/librte_timer.so.24.1 00:04:32.311 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:32.311 [237/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:32.311 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:32.311 [239/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:32.311 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:32.311 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:32.570 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:32.570 [243/268] Linking target lib/librte_rcu.so.24.1 00:04:32.570 [244/268] Linking target lib/librte_mempool.so.24.1 00:04:32.570 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:32.570 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:32.570 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:32.570 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:32.570 [249/268] Linking target lib/librte_mbuf.so.24.1 00:04:32.828 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:32.828 [251/268] Linking target lib/librte_compressdev.so.24.1 00:04:32.828 [252/268] Linking target lib/librte_reorder.so.24.1 00:04:32.828 [253/268] Linking target lib/librte_net.so.24.1 00:04:32.828 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:04:33.086 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:33.086 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:33.086 [257/268] Linking target lib/librte_hash.so.24.1 00:04:33.086 [258/268] Linking target lib/librte_cmdline.so.24.1 00:04:33.086 [259/268] Linking target lib/librte_security.so.24.1 00:04:33.086 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:34.110 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:34.110 [262/268] Linking target lib/librte_ethdev.so.24.1 00:04:34.367 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:34.367 [264/268] Linking target lib/librte_power.so.24.1 00:04:36.898 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:36.898 [266/268] Linking static target lib/librte_vhost.a 00:04:38.799 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:38.799 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:38.799 INFO: autodetecting backend as ninja 00:04:38.799 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:40.189 CC lib/log/log.o 00:04:40.189 CC lib/log/log_flags.o 00:04:40.189 CC lib/log/log_deprecated.o 00:04:40.189 CC lib/ut_mock/mock.o 00:04:40.189 CC lib/ut/ut.o 00:04:40.189 LIB libspdk_log.a 00:04:40.189 LIB libspdk_ut_mock.a 00:04:40.189 LIB libspdk_ut.a 00:04:40.447 SO libspdk_log.so.7.0 00:04:40.447 SO libspdk_ut_mock.so.6.0 00:04:40.447 SO libspdk_ut.so.2.0 00:04:40.447 SYMLINK libspdk_ut_mock.so 00:04:40.447 SYMLINK libspdk_log.so 00:04:40.447 SYMLINK 
libspdk_ut.so 00:04:40.704 CXX lib/trace_parser/trace.o 00:04:40.704 CC lib/util/base64.o 00:04:40.704 CC lib/util/bit_array.o 00:04:40.704 CC lib/util/cpuset.o 00:04:40.704 CC lib/util/crc16.o 00:04:40.704 CC lib/ioat/ioat.o 00:04:40.704 CC lib/util/crc32.o 00:04:40.704 CC lib/dma/dma.o 00:04:40.704 CC lib/util/crc32c.o 00:04:40.704 CC lib/vfio_user/host/vfio_user_pci.o 00:04:40.704 CC lib/vfio_user/host/vfio_user.o 00:04:40.704 CC lib/util/crc32_ieee.o 00:04:40.962 CC lib/util/crc64.o 00:04:40.962 CC lib/util/dif.o 00:04:40.962 CC lib/util/fd.o 00:04:40.962 CC lib/util/fd_group.o 00:04:40.962 LIB libspdk_dma.a 00:04:40.962 CC lib/util/file.o 00:04:40.962 SO libspdk_dma.so.4.0 00:04:40.962 CC lib/util/hexlify.o 00:04:40.962 CC lib/util/iov.o 00:04:41.219 LIB libspdk_ioat.a 00:04:41.219 SYMLINK libspdk_dma.so 00:04:41.219 CC lib/util/math.o 00:04:41.219 CC lib/util/net.o 00:04:41.219 SO libspdk_ioat.so.7.0 00:04:41.219 LIB libspdk_vfio_user.a 00:04:41.219 CC lib/util/pipe.o 00:04:41.219 SYMLINK libspdk_ioat.so 00:04:41.219 CC lib/util/strerror_tls.o 00:04:41.219 SO libspdk_vfio_user.so.5.0 00:04:41.219 CC lib/util/string.o 00:04:41.219 CC lib/util/uuid.o 00:04:41.219 CC lib/util/xor.o 00:04:41.219 SYMLINK libspdk_vfio_user.so 00:04:41.219 CC lib/util/zipf.o 00:04:41.784 LIB libspdk_util.a 00:04:41.784 SO libspdk_util.so.10.0 00:04:42.041 LIB libspdk_trace_parser.a 00:04:42.041 SYMLINK libspdk_util.so 00:04:42.041 SO libspdk_trace_parser.so.5.0 00:04:42.299 CC lib/rdma_provider/common.o 00:04:42.299 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:42.299 CC lib/vmd/vmd.o 00:04:42.299 CC lib/vmd/led.o 00:04:42.299 CC lib/env_dpdk/env.o 00:04:42.299 CC lib/rdma_utils/rdma_utils.o 00:04:42.299 CC lib/json/json_parse.o 00:04:42.299 SYMLINK libspdk_trace_parser.so 00:04:42.299 CC lib/conf/conf.o 00:04:42.299 CC lib/json/json_util.o 00:04:42.299 CC lib/idxd/idxd.o 00:04:42.556 CC lib/json/json_write.o 00:04:42.556 CC lib/idxd/idxd_user.o 00:04:42.556 LIB libspdk_rdma_provider.a 00:04:42.556 SO libspdk_rdma_provider.so.6.0 00:04:42.556 LIB libspdk_conf.a 00:04:42.556 CC lib/idxd/idxd_kernel.o 00:04:42.556 CC lib/env_dpdk/memory.o 00:04:42.556 SO libspdk_conf.so.6.0 00:04:42.556 SYMLINK libspdk_rdma_provider.so 00:04:42.813 CC lib/env_dpdk/pci.o 00:04:42.813 LIB libspdk_rdma_utils.a 00:04:42.813 SO libspdk_rdma_utils.so.1.0 00:04:42.813 SYMLINK libspdk_conf.so 00:04:42.813 CC lib/env_dpdk/init.o 00:04:42.813 SYMLINK libspdk_rdma_utils.so 00:04:42.813 CC lib/env_dpdk/threads.o 00:04:42.813 CC lib/env_dpdk/pci_ioat.o 00:04:42.813 CC lib/env_dpdk/pci_virtio.o 00:04:42.813 LIB libspdk_json.a 00:04:42.813 SO libspdk_json.so.6.0 00:04:43.070 CC lib/env_dpdk/pci_vmd.o 00:04:43.070 SYMLINK libspdk_json.so 00:04:43.070 CC lib/env_dpdk/pci_idxd.o 00:04:43.070 CC lib/env_dpdk/pci_event.o 00:04:43.070 CC lib/env_dpdk/sigbus_handler.o 00:04:43.070 CC lib/env_dpdk/pci_dpdk.o 00:04:43.070 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:43.070 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:43.328 LIB libspdk_vmd.a 00:04:43.328 SO libspdk_vmd.so.6.0 00:04:43.328 LIB libspdk_idxd.a 00:04:43.328 SYMLINK libspdk_vmd.so 00:04:43.328 CC lib/jsonrpc/jsonrpc_server.o 00:04:43.328 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:43.328 CC lib/jsonrpc/jsonrpc_client.o 00:04:43.328 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:43.328 SO libspdk_idxd.so.12.0 00:04:43.586 SYMLINK libspdk_idxd.so 00:04:43.586 LIB libspdk_jsonrpc.a 00:04:43.844 SO libspdk_jsonrpc.so.6.0 00:04:43.844 SYMLINK libspdk_jsonrpc.so 00:04:44.128 CC lib/rpc/rpc.o 00:04:44.388 LIB 
libspdk_rpc.a 00:04:44.388 SO libspdk_rpc.so.6.0 00:04:44.646 SYMLINK libspdk_rpc.so 00:04:44.646 LIB libspdk_env_dpdk.a 00:04:44.646 SO libspdk_env_dpdk.so.15.0 00:04:44.904 CC lib/trace/trace.o 00:04:44.904 CC lib/trace/trace_flags.o 00:04:44.904 CC lib/trace/trace_rpc.o 00:04:44.904 CC lib/keyring/keyring.o 00:04:44.904 CC lib/keyring/keyring_rpc.o 00:04:44.904 CC lib/notify/notify.o 00:04:44.904 CC lib/notify/notify_rpc.o 00:04:44.904 SYMLINK libspdk_env_dpdk.so 00:04:44.904 LIB libspdk_notify.a 00:04:45.162 SO libspdk_notify.so.6.0 00:04:45.162 LIB libspdk_keyring.a 00:04:45.162 SYMLINK libspdk_notify.so 00:04:45.162 LIB libspdk_trace.a 00:04:45.162 SO libspdk_keyring.so.1.0 00:04:45.162 SO libspdk_trace.so.10.0 00:04:45.162 SYMLINK libspdk_keyring.so 00:04:45.420 SYMLINK libspdk_trace.so 00:04:45.678 CC lib/sock/sock.o 00:04:45.678 CC lib/sock/sock_rpc.o 00:04:45.678 CC lib/thread/thread.o 00:04:45.678 CC lib/thread/iobuf.o 00:04:46.243 LIB libspdk_sock.a 00:04:46.243 SO libspdk_sock.so.10.0 00:04:46.243 SYMLINK libspdk_sock.so 00:04:46.808 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:46.808 CC lib/nvme/nvme_fabric.o 00:04:46.808 CC lib/nvme/nvme_ctrlr.o 00:04:46.808 CC lib/nvme/nvme_pcie_common.o 00:04:46.808 CC lib/nvme/nvme_ns_cmd.o 00:04:46.808 CC lib/nvme/nvme_ns.o 00:04:46.808 CC lib/nvme/nvme_pcie.o 00:04:46.808 CC lib/nvme/nvme.o 00:04:46.808 CC lib/nvme/nvme_qpair.o 00:04:47.742 CC lib/nvme/nvme_quirks.o 00:04:47.742 CC lib/nvme/nvme_transport.o 00:04:47.742 LIB libspdk_thread.a 00:04:47.742 CC lib/nvme/nvme_discovery.o 00:04:47.742 SO libspdk_thread.so.10.1 00:04:47.742 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:47.742 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:47.742 SYMLINK libspdk_thread.so 00:04:47.742 CC lib/nvme/nvme_tcp.o 00:04:48.000 CC lib/nvme/nvme_opal.o 00:04:48.000 CC lib/nvme/nvme_io_msg.o 00:04:48.000 CC lib/nvme/nvme_poll_group.o 00:04:48.258 CC lib/nvme/nvme_zns.o 00:04:48.258 CC lib/nvme/nvme_stubs.o 00:04:48.516 CC lib/nvme/nvme_auth.o 00:04:48.516 CC lib/nvme/nvme_cuse.o 00:04:48.516 CC lib/nvme/nvme_vfio_user.o 00:04:48.516 CC lib/nvme/nvme_rdma.o 00:04:48.774 CC lib/accel/accel.o 00:04:48.774 CC lib/accel/accel_rpc.o 00:04:49.031 CC lib/accel/accel_sw.o 00:04:49.305 CC lib/blob/blobstore.o 00:04:49.305 CC lib/blob/request.o 00:04:49.305 CC lib/blob/zeroes.o 00:04:49.587 CC lib/init/json_config.o 00:04:49.587 CC lib/blob/blob_bs_dev.o 00:04:49.587 CC lib/init/subsystem.o 00:04:49.587 CC lib/init/subsystem_rpc.o 00:04:49.845 CC lib/init/rpc.o 00:04:49.845 CC lib/virtio/virtio.o 00:04:49.845 CC lib/virtio/virtio_vhost_user.o 00:04:49.845 CC lib/virtio/virtio_pci.o 00:04:49.845 CC lib/virtio/virtio_vfio_user.o 00:04:49.845 LIB libspdk_init.a 00:04:49.845 CC lib/vfu_tgt/tgt_endpoint.o 00:04:49.845 CC lib/vfu_tgt/tgt_rpc.o 00:04:50.103 SO libspdk_init.so.5.0 00:04:50.103 LIB libspdk_accel.a 00:04:50.103 SYMLINK libspdk_init.so 00:04:50.103 SO libspdk_accel.so.16.0 00:04:50.103 SYMLINK libspdk_accel.so 00:04:50.362 CC lib/event/app.o 00:04:50.362 CC lib/event/reactor.o 00:04:50.362 CC lib/event/log_rpc.o 00:04:50.362 CC lib/event/scheduler_static.o 00:04:50.362 CC lib/event/app_rpc.o 00:04:50.362 LIB libspdk_virtio.a 00:04:50.362 LIB libspdk_nvme.a 00:04:50.362 SO libspdk_virtio.so.7.0 00:04:50.362 LIB libspdk_vfu_tgt.a 00:04:50.362 SO libspdk_vfu_tgt.so.3.0 00:04:50.362 CC lib/bdev/bdev.o 00:04:50.362 CC lib/bdev/bdev_rpc.o 00:04:50.362 SYMLINK libspdk_virtio.so 00:04:50.362 CC lib/bdev/bdev_zone.o 00:04:50.620 SYMLINK libspdk_vfu_tgt.so 00:04:50.620 CC lib/bdev/part.o 
00:04:50.620 CC lib/bdev/scsi_nvme.o 00:04:50.620 SO libspdk_nvme.so.13.1 00:04:50.880 LIB libspdk_event.a 00:04:50.880 SYMLINK libspdk_nvme.so 00:04:50.880 SO libspdk_event.so.14.0 00:04:51.138 SYMLINK libspdk_event.so 00:04:53.670 LIB libspdk_blob.a 00:04:53.936 SO libspdk_blob.so.11.0 00:04:53.936 LIB libspdk_bdev.a 00:04:54.204 SYMLINK libspdk_blob.so 00:04:54.204 SO libspdk_bdev.so.16.0 00:04:54.204 SYMLINK libspdk_bdev.so 00:04:54.204 CC lib/blobfs/blobfs.o 00:04:54.204 CC lib/blobfs/tree.o 00:04:54.204 CC lib/lvol/lvol.o 00:04:54.462 CC lib/nvmf/ctrlr.o 00:04:54.462 CC lib/nvmf/ctrlr_discovery.o 00:04:54.462 CC lib/nvmf/ctrlr_bdev.o 00:04:54.462 CC lib/ftl/ftl_core.o 00:04:54.462 CC lib/ublk/ublk.o 00:04:54.462 CC lib/nbd/nbd.o 00:04:54.462 CC lib/scsi/dev.o 00:04:54.462 CC lib/scsi/lun.o 00:04:54.720 CC lib/scsi/port.o 00:04:54.720 CC lib/scsi/scsi.o 00:04:54.979 CC lib/ftl/ftl_init.o 00:04:54.979 CC lib/scsi/scsi_bdev.o 00:04:54.979 CC lib/nbd/nbd_rpc.o 00:04:54.979 CC lib/scsi/scsi_pr.o 00:04:54.979 CC lib/ftl/ftl_layout.o 00:04:55.237 CC lib/scsi/scsi_rpc.o 00:04:55.237 LIB libspdk_nbd.a 00:04:55.237 SO libspdk_nbd.so.7.0 00:04:55.237 CC lib/ublk/ublk_rpc.o 00:04:55.237 CC lib/scsi/task.o 00:04:55.237 SYMLINK libspdk_nbd.so 00:04:55.237 CC lib/nvmf/subsystem.o 00:04:55.237 LIB libspdk_blobfs.a 00:04:55.237 CC lib/nvmf/nvmf.o 00:04:55.496 SO libspdk_blobfs.so.10.0 00:04:55.496 CC lib/ftl/ftl_debug.o 00:04:55.496 LIB libspdk_ublk.a 00:04:55.496 CC lib/ftl/ftl_io.o 00:04:55.496 SYMLINK libspdk_blobfs.so 00:04:55.496 CC lib/nvmf/nvmf_rpc.o 00:04:55.496 SO libspdk_ublk.so.3.0 00:04:55.496 LIB libspdk_scsi.a 00:04:55.496 CC lib/nvmf/transport.o 00:04:55.496 LIB libspdk_lvol.a 00:04:55.496 SYMLINK libspdk_ublk.so 00:04:55.496 CC lib/ftl/ftl_sb.o 00:04:55.496 SO libspdk_scsi.so.9.0 00:04:55.496 SO libspdk_lvol.so.10.0 00:04:55.754 SYMLINK libspdk_lvol.so 00:04:55.754 CC lib/nvmf/tcp.o 00:04:55.754 CC lib/nvmf/stubs.o 00:04:55.754 SYMLINK libspdk_scsi.so 00:04:55.754 CC lib/nvmf/mdns_server.o 00:04:55.754 CC lib/nvmf/vfio_user.o 00:04:55.754 CC lib/ftl/ftl_l2p.o 00:04:56.013 CC lib/ftl/ftl_l2p_flat.o 00:04:56.271 CC lib/nvmf/rdma.o 00:04:56.271 CC lib/ftl/ftl_nv_cache.o 00:04:56.271 CC lib/ftl/ftl_band.o 00:04:56.530 CC lib/nvmf/auth.o 00:04:56.530 CC lib/ftl/ftl_band_ops.o 00:04:56.530 CC lib/iscsi/conn.o 00:04:56.788 CC lib/iscsi/init_grp.o 00:04:56.788 CC lib/ftl/ftl_writer.o 00:04:56.788 CC lib/ftl/ftl_rq.o 00:04:57.046 CC lib/iscsi/iscsi.o 00:04:57.046 CC lib/iscsi/md5.o 00:04:57.046 CC lib/vhost/vhost.o 00:04:57.046 CC lib/vhost/vhost_rpc.o 00:04:57.304 CC lib/iscsi/param.o 00:04:57.304 CC lib/iscsi/portal_grp.o 00:04:57.562 CC lib/iscsi/tgt_node.o 00:04:57.562 CC lib/ftl/ftl_reloc.o 00:04:57.562 CC lib/iscsi/iscsi_subsystem.o 00:04:57.562 CC lib/iscsi/iscsi_rpc.o 00:04:57.820 CC lib/iscsi/task.o 00:04:57.820 CC lib/vhost/vhost_scsi.o 00:04:57.820 CC lib/vhost/vhost_blk.o 00:04:57.820 CC lib/ftl/ftl_l2p_cache.o 00:04:57.820 CC lib/ftl/ftl_p2l.o 00:04:58.079 CC lib/vhost/rte_vhost_user.o 00:04:58.079 CC lib/ftl/mngt/ftl_mngt.o 00:04:58.079 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:58.337 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:58.337 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:58.337 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:58.337 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:58.595 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:58.595 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:58.595 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:58.595 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:58.853 CC lib/ftl/mngt/ftl_mngt_p2l.o 
00:04:58.853 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:58.853 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:58.853 LIB libspdk_iscsi.a 00:04:58.853 CC lib/ftl/utils/ftl_conf.o 00:04:58.853 CC lib/ftl/utils/ftl_md.o 00:04:58.853 SO libspdk_iscsi.so.8.0 00:04:59.111 LIB libspdk_nvmf.a 00:04:59.111 CC lib/ftl/utils/ftl_mempool.o 00:04:59.111 CC lib/ftl/utils/ftl_bitmap.o 00:04:59.111 CC lib/ftl/utils/ftl_property.o 00:04:59.111 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:59.111 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:59.111 SYMLINK libspdk_iscsi.so 00:04:59.111 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:59.111 SO libspdk_nvmf.so.19.0 00:04:59.111 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:59.111 LIB libspdk_vhost.a 00:04:59.111 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:59.372 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:59.372 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:59.372 SO libspdk_vhost.so.8.0 00:04:59.372 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:59.372 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:59.372 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:59.372 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:59.372 SYMLINK libspdk_vhost.so 00:04:59.372 CC lib/ftl/base/ftl_base_dev.o 00:04:59.372 CC lib/ftl/base/ftl_base_bdev.o 00:04:59.372 CC lib/ftl/ftl_trace.o 00:04:59.640 SYMLINK libspdk_nvmf.so 00:04:59.897 LIB libspdk_ftl.a 00:05:00.156 SO libspdk_ftl.so.9.0 00:05:00.414 SYMLINK libspdk_ftl.so 00:05:00.981 CC module/env_dpdk/env_dpdk_rpc.o 00:05:00.981 CC module/vfu_device/vfu_virtio.o 00:05:00.981 CC module/accel/error/accel_error.o 00:05:00.981 CC module/accel/iaa/accel_iaa.o 00:05:00.981 CC module/sock/posix/posix.o 00:05:00.981 CC module/accel/ioat/accel_ioat.o 00:05:00.981 CC module/blob/bdev/blob_bdev.o 00:05:00.981 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:00.981 CC module/accel/dsa/accel_dsa.o 00:05:00.981 CC module/keyring/file/keyring.o 00:05:00.981 LIB libspdk_env_dpdk_rpc.a 00:05:00.981 SO libspdk_env_dpdk_rpc.so.6.0 00:05:00.981 SYMLINK libspdk_env_dpdk_rpc.so 00:05:00.981 CC module/accel/ioat/accel_ioat_rpc.o 00:05:00.981 CC module/keyring/file/keyring_rpc.o 00:05:01.239 CC module/accel/error/accel_error_rpc.o 00:05:01.239 CC module/accel/dsa/accel_dsa_rpc.o 00:05:01.239 LIB libspdk_scheduler_dynamic.a 00:05:01.239 CC module/accel/iaa/accel_iaa_rpc.o 00:05:01.239 SO libspdk_scheduler_dynamic.so.4.0 00:05:01.239 LIB libspdk_accel_ioat.a 00:05:01.239 LIB libspdk_blob_bdev.a 00:05:01.239 LIB libspdk_keyring_file.a 00:05:01.239 SO libspdk_accel_ioat.so.6.0 00:05:01.239 SYMLINK libspdk_scheduler_dynamic.so 00:05:01.239 SO libspdk_blob_bdev.so.11.0 00:05:01.239 SO libspdk_keyring_file.so.1.0 00:05:01.239 LIB libspdk_accel_error.a 00:05:01.239 LIB libspdk_accel_dsa.a 00:05:01.239 LIB libspdk_accel_iaa.a 00:05:01.239 SO libspdk_accel_error.so.2.0 00:05:01.239 SYMLINK libspdk_accel_ioat.so 00:05:01.239 SO libspdk_accel_dsa.so.5.0 00:05:01.240 SO libspdk_accel_iaa.so.3.0 00:05:01.240 SYMLINK libspdk_blob_bdev.so 00:05:01.240 SYMLINK libspdk_keyring_file.so 00:05:01.497 SYMLINK libspdk_accel_error.so 00:05:01.497 SYMLINK libspdk_accel_iaa.so 00:05:01.497 CC module/vfu_device/vfu_virtio_blk.o 00:05:01.497 SYMLINK libspdk_accel_dsa.so 00:05:01.497 CC module/sock/uring/uring.o 00:05:01.497 CC module/vfu_device/vfu_virtio_scsi.o 00:05:01.497 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:01.497 CC module/scheduler/gscheduler/gscheduler.o 00:05:01.497 CC module/keyring/linux/keyring.o 00:05:01.755 CC module/bdev/delay/vbdev_delay.o 00:05:01.755 LIB libspdk_scheduler_dpdk_governor.a 
00:05:01.755 CC module/blobfs/bdev/blobfs_bdev.o 00:05:01.755 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:01.755 CC module/keyring/linux/keyring_rpc.o 00:05:01.755 LIB libspdk_scheduler_gscheduler.a 00:05:01.755 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:01.755 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:01.755 CC module/vfu_device/vfu_virtio_rpc.o 00:05:01.755 SO libspdk_scheduler_gscheduler.so.4.0 00:05:01.755 SYMLINK libspdk_scheduler_gscheduler.so 00:05:01.755 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:01.755 LIB libspdk_keyring_linux.a 00:05:02.033 LIB libspdk_sock_posix.a 00:05:02.033 CC module/bdev/error/vbdev_error.o 00:05:02.033 SO libspdk_keyring_linux.so.1.0 00:05:02.033 SO libspdk_sock_posix.so.6.0 00:05:02.033 LIB libspdk_blobfs_bdev.a 00:05:02.033 LIB libspdk_vfu_device.a 00:05:02.033 SYMLINK libspdk_keyring_linux.so 00:05:02.033 SO libspdk_blobfs_bdev.so.6.0 00:05:02.033 SYMLINK libspdk_sock_posix.so 00:05:02.033 CC module/bdev/error/vbdev_error_rpc.o 00:05:02.033 SO libspdk_vfu_device.so.3.0 00:05:02.033 CC module/bdev/gpt/gpt.o 00:05:02.033 CC module/bdev/lvol/vbdev_lvol.o 00:05:02.033 SYMLINK libspdk_blobfs_bdev.so 00:05:02.033 LIB libspdk_bdev_delay.a 00:05:02.033 SYMLINK libspdk_vfu_device.so 00:05:02.033 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:02.292 SO libspdk_bdev_delay.so.6.0 00:05:02.292 CC module/bdev/gpt/vbdev_gpt.o 00:05:02.292 LIB libspdk_bdev_error.a 00:05:02.292 CC module/bdev/malloc/bdev_malloc.o 00:05:02.292 SYMLINK libspdk_bdev_delay.so 00:05:02.292 CC module/bdev/nvme/bdev_nvme.o 00:05:02.292 CC module/bdev/null/bdev_null.o 00:05:02.292 SO libspdk_bdev_error.so.6.0 00:05:02.292 CC module/bdev/null/bdev_null_rpc.o 00:05:02.292 SYMLINK libspdk_bdev_error.so 00:05:02.292 LIB libspdk_sock_uring.a 00:05:02.292 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:02.550 SO libspdk_sock_uring.so.5.0 00:05:02.550 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:02.550 SYMLINK libspdk_sock_uring.so 00:05:02.550 CC module/bdev/nvme/nvme_rpc.o 00:05:02.550 LIB libspdk_bdev_gpt.a 00:05:02.550 CC module/bdev/passthru/vbdev_passthru.o 00:05:02.550 LIB libspdk_bdev_null.a 00:05:02.550 CC module/bdev/nvme/bdev_mdns_client.o 00:05:02.550 SO libspdk_bdev_gpt.so.6.0 00:05:02.550 CC module/bdev/nvme/vbdev_opal.o 00:05:02.550 SO libspdk_bdev_null.so.6.0 00:05:02.809 SYMLINK libspdk_bdev_gpt.so 00:05:02.809 SYMLINK libspdk_bdev_null.so 00:05:02.809 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:02.809 LIB libspdk_bdev_malloc.a 00:05:02.809 LIB libspdk_bdev_lvol.a 00:05:02.809 SO libspdk_bdev_malloc.so.6.0 00:05:02.809 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:02.809 SO libspdk_bdev_lvol.so.6.0 00:05:02.809 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:02.809 SYMLINK libspdk_bdev_malloc.so 00:05:02.809 SYMLINK libspdk_bdev_lvol.so 00:05:02.809 LIB libspdk_bdev_passthru.a 00:05:03.067 CC module/bdev/raid/bdev_raid.o 00:05:03.067 SO libspdk_bdev_passthru.so.6.0 00:05:03.067 CC module/bdev/split/vbdev_split.o 00:05:03.067 SYMLINK libspdk_bdev_passthru.so 00:05:03.067 CC module/bdev/raid/bdev_raid_rpc.o 00:05:03.067 CC module/bdev/raid/bdev_raid_sb.o 00:05:03.067 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:03.068 CC module/bdev/uring/bdev_uring.o 00:05:03.068 CC module/bdev/aio/bdev_aio.o 00:05:03.325 CC module/bdev/ftl/bdev_ftl.o 00:05:03.325 CC module/bdev/split/vbdev_split_rpc.o 00:05:03.325 CC module/bdev/aio/bdev_aio_rpc.o 00:05:03.325 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:03.584 LIB libspdk_bdev_split.a 00:05:03.584 CC module/bdev/raid/raid0.o 
00:05:03.584 LIB libspdk_bdev_zone_block.a 00:05:03.584 LIB libspdk_bdev_aio.a 00:05:03.584 CC module/bdev/uring/bdev_uring_rpc.o 00:05:03.584 SO libspdk_bdev_split.so.6.0 00:05:03.584 SO libspdk_bdev_zone_block.so.6.0 00:05:03.584 SO libspdk_bdev_aio.so.6.0 00:05:03.584 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:03.584 CC module/bdev/iscsi/bdev_iscsi.o 00:05:03.584 SYMLINK libspdk_bdev_split.so 00:05:03.584 SYMLINK libspdk_bdev_zone_block.so 00:05:03.584 CC module/bdev/raid/raid1.o 00:05:03.584 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:03.584 SYMLINK libspdk_bdev_aio.so 00:05:03.584 CC module/bdev/raid/concat.o 00:05:03.584 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:03.842 LIB libspdk_bdev_uring.a 00:05:03.842 SO libspdk_bdev_uring.so.6.0 00:05:03.842 LIB libspdk_bdev_ftl.a 00:05:03.842 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:03.842 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:03.842 SYMLINK libspdk_bdev_uring.so 00:05:03.842 SO libspdk_bdev_ftl.so.6.0 00:05:03.842 SYMLINK libspdk_bdev_ftl.so 00:05:04.099 LIB libspdk_bdev_iscsi.a 00:05:04.099 SO libspdk_bdev_iscsi.so.6.0 00:05:04.099 SYMLINK libspdk_bdev_iscsi.so 00:05:04.358 LIB libspdk_bdev_raid.a 00:05:04.358 SO libspdk_bdev_raid.so.6.0 00:05:04.358 LIB libspdk_bdev_virtio.a 00:05:04.358 SO libspdk_bdev_virtio.so.6.0 00:05:04.358 SYMLINK libspdk_bdev_raid.so 00:05:04.616 SYMLINK libspdk_bdev_virtio.so 00:05:05.182 LIB libspdk_bdev_nvme.a 00:05:05.182 SO libspdk_bdev_nvme.so.7.0 00:05:05.440 SYMLINK libspdk_bdev_nvme.so 00:05:06.006 CC module/event/subsystems/sock/sock.o 00:05:06.006 CC module/event/subsystems/keyring/keyring.o 00:05:06.006 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:06.006 CC module/event/subsystems/vmd/vmd.o 00:05:06.006 CC module/event/subsystems/iobuf/iobuf.o 00:05:06.006 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:05:06.006 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:06.006 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:06.006 CC module/event/subsystems/scheduler/scheduler.o 00:05:06.006 LIB libspdk_event_keyring.a 00:05:06.006 LIB libspdk_event_vhost_blk.a 00:05:06.006 LIB libspdk_event_sock.a 00:05:06.006 LIB libspdk_event_scheduler.a 00:05:06.006 SO libspdk_event_keyring.so.1.0 00:05:06.006 LIB libspdk_event_vmd.a 00:05:06.006 LIB libspdk_event_vfu_tgt.a 00:05:06.006 SO libspdk_event_vhost_blk.so.3.0 00:05:06.264 SO libspdk_event_sock.so.5.0 00:05:06.264 LIB libspdk_event_iobuf.a 00:05:06.264 SO libspdk_event_scheduler.so.4.0 00:05:06.264 SO libspdk_event_vmd.so.6.0 00:05:06.264 SO libspdk_event_vfu_tgt.so.3.0 00:05:06.264 SYMLINK libspdk_event_vhost_blk.so 00:05:06.264 SYMLINK libspdk_event_keyring.so 00:05:06.264 SO libspdk_event_iobuf.so.3.0 00:05:06.264 SYMLINK libspdk_event_sock.so 00:05:06.264 SYMLINK libspdk_event_vfu_tgt.so 00:05:06.264 SYMLINK libspdk_event_vmd.so 00:05:06.264 SYMLINK libspdk_event_scheduler.so 00:05:06.264 SYMLINK libspdk_event_iobuf.so 00:05:06.522 CC module/event/subsystems/accel/accel.o 00:05:06.780 LIB libspdk_event_accel.a 00:05:06.780 SO libspdk_event_accel.so.6.0 00:05:06.780 SYMLINK libspdk_event_accel.so 00:05:07.038 CC module/event/subsystems/bdev/bdev.o 00:05:07.296 LIB libspdk_event_bdev.a 00:05:07.296 SO libspdk_event_bdev.so.6.0 00:05:07.296 SYMLINK libspdk_event_bdev.so 00:05:07.553 CC module/event/subsystems/nbd/nbd.o 00:05:07.553 CC module/event/subsystems/ublk/ublk.o 00:05:07.553 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:07.553 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:07.553 CC module/event/subsystems/scsi/scsi.o 
00:05:07.811 LIB libspdk_event_nbd.a 00:05:07.811 LIB libspdk_event_ublk.a 00:05:07.811 SO libspdk_event_nbd.so.6.0 00:05:07.811 LIB libspdk_event_scsi.a 00:05:07.811 SO libspdk_event_ublk.so.3.0 00:05:07.811 SO libspdk_event_scsi.so.6.0 00:05:07.811 SYMLINK libspdk_event_nbd.so 00:05:07.811 SYMLINK libspdk_event_ublk.so 00:05:07.811 SYMLINK libspdk_event_scsi.so 00:05:07.811 LIB libspdk_event_nvmf.a 00:05:08.070 SO libspdk_event_nvmf.so.6.0 00:05:08.070 SYMLINK libspdk_event_nvmf.so 00:05:08.070 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:08.070 CC module/event/subsystems/iscsi/iscsi.o 00:05:08.329 LIB libspdk_event_vhost_scsi.a 00:05:08.329 LIB libspdk_event_iscsi.a 00:05:08.329 SO libspdk_event_vhost_scsi.so.3.0 00:05:08.329 SO libspdk_event_iscsi.so.6.0 00:05:08.329 SYMLINK libspdk_event_vhost_scsi.so 00:05:08.329 SYMLINK libspdk_event_iscsi.so 00:05:08.588 SO libspdk.so.6.0 00:05:08.588 SYMLINK libspdk.so 00:05:08.846 CC app/spdk_lspci/spdk_lspci.o 00:05:08.846 CXX app/trace/trace.o 00:05:08.846 CC app/trace_record/trace_record.o 00:05:08.846 CC app/iscsi_tgt/iscsi_tgt.o 00:05:08.846 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:08.846 CC app/nvmf_tgt/nvmf_main.o 00:05:08.846 CC app/spdk_tgt/spdk_tgt.o 00:05:08.846 CC examples/ioat/perf/perf.o 00:05:09.104 CC examples/util/zipf/zipf.o 00:05:09.104 CC test/thread/poller_perf/poller_perf.o 00:05:09.104 LINK spdk_lspci 00:05:09.104 LINK interrupt_tgt 00:05:09.104 LINK nvmf_tgt 00:05:09.104 LINK zipf 00:05:09.104 LINK iscsi_tgt 00:05:09.104 LINK poller_perf 00:05:09.362 LINK spdk_trace_record 00:05:09.362 LINK spdk_tgt 00:05:09.362 LINK ioat_perf 00:05:09.362 CC app/spdk_nvme_perf/perf.o 00:05:09.362 LINK spdk_trace 00:05:09.362 CC app/spdk_nvme_identify/identify.o 00:05:09.648 CC examples/ioat/verify/verify.o 00:05:09.648 CC app/spdk_nvme_discover/discovery_aer.o 00:05:09.648 CC app/spdk_top/spdk_top.o 00:05:09.648 CC test/dma/test_dma/test_dma.o 00:05:09.648 CC examples/sock/hello_world/hello_sock.o 00:05:09.648 CC test/app/bdev_svc/bdev_svc.o 00:05:09.648 CC app/spdk_dd/spdk_dd.o 00:05:09.648 CC examples/thread/thread/thread_ex.o 00:05:09.926 LINK spdk_nvme_discover 00:05:09.926 LINK verify 00:05:09.926 LINK bdev_svc 00:05:09.926 LINK hello_sock 00:05:09.926 LINK thread 00:05:10.185 TEST_HEADER include/spdk/accel.h 00:05:10.185 TEST_HEADER include/spdk/accel_module.h 00:05:10.185 TEST_HEADER include/spdk/assert.h 00:05:10.186 TEST_HEADER include/spdk/barrier.h 00:05:10.186 TEST_HEADER include/spdk/base64.h 00:05:10.186 TEST_HEADER include/spdk/bdev.h 00:05:10.186 TEST_HEADER include/spdk/bdev_module.h 00:05:10.186 TEST_HEADER include/spdk/bdev_zone.h 00:05:10.186 TEST_HEADER include/spdk/bit_array.h 00:05:10.186 TEST_HEADER include/spdk/bit_pool.h 00:05:10.186 TEST_HEADER include/spdk/blob_bdev.h 00:05:10.186 LINK test_dma 00:05:10.186 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:10.186 TEST_HEADER include/spdk/blobfs.h 00:05:10.186 TEST_HEADER include/spdk/blob.h 00:05:10.186 TEST_HEADER include/spdk/conf.h 00:05:10.186 TEST_HEADER include/spdk/config.h 00:05:10.186 TEST_HEADER include/spdk/cpuset.h 00:05:10.186 TEST_HEADER include/spdk/crc16.h 00:05:10.186 TEST_HEADER include/spdk/crc32.h 00:05:10.186 TEST_HEADER include/spdk/crc64.h 00:05:10.186 TEST_HEADER include/spdk/dif.h 00:05:10.186 TEST_HEADER include/spdk/dma.h 00:05:10.186 TEST_HEADER include/spdk/endian.h 00:05:10.186 TEST_HEADER include/spdk/env_dpdk.h 00:05:10.186 TEST_HEADER include/spdk/env.h 00:05:10.186 TEST_HEADER include/spdk/event.h 00:05:10.186 
TEST_HEADER include/spdk/fd_group.h 00:05:10.186 TEST_HEADER include/spdk/fd.h 00:05:10.186 TEST_HEADER include/spdk/file.h 00:05:10.186 TEST_HEADER include/spdk/ftl.h 00:05:10.186 TEST_HEADER include/spdk/gpt_spec.h 00:05:10.186 TEST_HEADER include/spdk/hexlify.h 00:05:10.186 TEST_HEADER include/spdk/histogram_data.h 00:05:10.186 TEST_HEADER include/spdk/idxd.h 00:05:10.186 TEST_HEADER include/spdk/idxd_spec.h 00:05:10.186 TEST_HEADER include/spdk/init.h 00:05:10.186 TEST_HEADER include/spdk/ioat.h 00:05:10.186 TEST_HEADER include/spdk/ioat_spec.h 00:05:10.186 TEST_HEADER include/spdk/iscsi_spec.h 00:05:10.186 TEST_HEADER include/spdk/json.h 00:05:10.186 TEST_HEADER include/spdk/jsonrpc.h 00:05:10.186 TEST_HEADER include/spdk/keyring.h 00:05:10.186 TEST_HEADER include/spdk/keyring_module.h 00:05:10.186 TEST_HEADER include/spdk/likely.h 00:05:10.186 TEST_HEADER include/spdk/log.h 00:05:10.186 TEST_HEADER include/spdk/lvol.h 00:05:10.186 TEST_HEADER include/spdk/memory.h 00:05:10.186 TEST_HEADER include/spdk/mmio.h 00:05:10.186 TEST_HEADER include/spdk/nbd.h 00:05:10.186 TEST_HEADER include/spdk/net.h 00:05:10.186 TEST_HEADER include/spdk/notify.h 00:05:10.186 TEST_HEADER include/spdk/nvme.h 00:05:10.186 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:10.186 TEST_HEADER include/spdk/nvme_intel.h 00:05:10.186 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:10.186 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:10.186 TEST_HEADER include/spdk/nvme_spec.h 00:05:10.186 TEST_HEADER include/spdk/nvme_zns.h 00:05:10.186 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:10.186 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:10.186 TEST_HEADER include/spdk/nvmf.h 00:05:10.186 TEST_HEADER include/spdk/nvmf_spec.h 00:05:10.186 TEST_HEADER include/spdk/nvmf_transport.h 00:05:10.186 TEST_HEADER include/spdk/opal.h 00:05:10.186 TEST_HEADER include/spdk/opal_spec.h 00:05:10.186 TEST_HEADER include/spdk/pci_ids.h 00:05:10.186 TEST_HEADER include/spdk/pipe.h 00:05:10.186 TEST_HEADER include/spdk/queue.h 00:05:10.186 TEST_HEADER include/spdk/reduce.h 00:05:10.186 TEST_HEADER include/spdk/rpc.h 00:05:10.186 TEST_HEADER include/spdk/scheduler.h 00:05:10.186 TEST_HEADER include/spdk/scsi.h 00:05:10.186 TEST_HEADER include/spdk/scsi_spec.h 00:05:10.186 TEST_HEADER include/spdk/sock.h 00:05:10.186 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:10.186 TEST_HEADER include/spdk/stdinc.h 00:05:10.186 TEST_HEADER include/spdk/string.h 00:05:10.186 TEST_HEADER include/spdk/thread.h 00:05:10.186 TEST_HEADER include/spdk/trace.h 00:05:10.186 TEST_HEADER include/spdk/trace_parser.h 00:05:10.186 TEST_HEADER include/spdk/tree.h 00:05:10.186 TEST_HEADER include/spdk/ublk.h 00:05:10.186 TEST_HEADER include/spdk/util.h 00:05:10.186 TEST_HEADER include/spdk/uuid.h 00:05:10.186 TEST_HEADER include/spdk/version.h 00:05:10.186 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:10.186 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:10.186 TEST_HEADER include/spdk/vhost.h 00:05:10.186 CC test/app/histogram_perf/histogram_perf.o 00:05:10.186 TEST_HEADER include/spdk/vmd.h 00:05:10.186 TEST_HEADER include/spdk/xor.h 00:05:10.186 TEST_HEADER include/spdk/zipf.h 00:05:10.186 LINK spdk_dd 00:05:10.186 CXX test/cpp_headers/accel.o 00:05:10.444 LINK histogram_perf 00:05:10.444 CXX test/cpp_headers/accel_module.o 00:05:10.444 CXX test/cpp_headers/assert.o 00:05:10.444 LINK spdk_nvme_perf 00:05:10.702 CC examples/vmd/lsvmd/lsvmd.o 00:05:10.702 CC examples/idxd/perf/perf.o 00:05:10.702 LINK spdk_nvme_identify 00:05:10.702 CXX test/cpp_headers/barrier.o 
00:05:10.702 LINK nvme_fuzz 00:05:10.702 LINK lsvmd 00:05:10.702 CXX test/cpp_headers/base64.o 00:05:10.702 LINK spdk_top 00:05:10.960 CC test/event/event_perf/event_perf.o 00:05:10.960 CXX test/cpp_headers/bdev.o 00:05:10.960 CC test/env/mem_callbacks/mem_callbacks.o 00:05:10.960 CC test/env/vtophys/vtophys.o 00:05:10.960 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:10.960 CC test/event/reactor/reactor.o 00:05:10.960 CC examples/vmd/led/led.o 00:05:10.960 LINK idxd_perf 00:05:11.218 LINK event_perf 00:05:11.218 CXX test/cpp_headers/bdev_module.o 00:05:11.218 CC app/fio/nvme/fio_plugin.o 00:05:11.218 LINK vtophys 00:05:11.218 LINK reactor 00:05:11.218 LINK env_dpdk_post_init 00:05:11.218 LINK led 00:05:11.218 CXX test/cpp_headers/bdev_zone.o 00:05:11.475 CC examples/accel/perf/accel_perf.o 00:05:11.475 CXX test/cpp_headers/bit_array.o 00:05:11.475 CC test/event/reactor_perf/reactor_perf.o 00:05:11.475 CC test/event/app_repeat/app_repeat.o 00:05:11.475 CC test/rpc_client/rpc_client_test.o 00:05:11.475 CC examples/blob/hello_world/hello_blob.o 00:05:11.733 LINK mem_callbacks 00:05:11.733 CC test/nvme/aer/aer.o 00:05:11.733 CXX test/cpp_headers/bit_pool.o 00:05:11.733 LINK app_repeat 00:05:11.733 LINK reactor_perf 00:05:11.733 LINK rpc_client_test 00:05:11.992 CXX test/cpp_headers/blob_bdev.o 00:05:11.992 LINK hello_blob 00:05:11.992 CC test/env/memory/memory_ut.o 00:05:11.992 CXX test/cpp_headers/blobfs_bdev.o 00:05:11.992 LINK spdk_nvme 00:05:11.992 CXX test/cpp_headers/blobfs.o 00:05:11.992 LINK aer 00:05:11.992 CC test/event/scheduler/scheduler.o 00:05:12.250 LINK accel_perf 00:05:12.250 CXX test/cpp_headers/blob.o 00:05:12.250 CC app/fio/bdev/fio_plugin.o 00:05:12.250 CC examples/blob/cli/blobcli.o 00:05:12.250 CC test/env/pci/pci_ut.o 00:05:12.250 CC test/nvme/reset/reset.o 00:05:12.250 CXX test/cpp_headers/conf.o 00:05:12.509 LINK scheduler 00:05:12.509 CC test/accel/dif/dif.o 00:05:12.509 CXX test/cpp_headers/config.o 00:05:12.509 CC test/blobfs/mkfs/mkfs.o 00:05:12.509 CXX test/cpp_headers/cpuset.o 00:05:12.509 LINK iscsi_fuzz 00:05:12.767 CXX test/cpp_headers/crc16.o 00:05:12.767 LINK reset 00:05:12.767 LINK pci_ut 00:05:12.767 LINK mkfs 00:05:12.767 LINK blobcli 00:05:12.767 LINK spdk_bdev 00:05:12.767 CXX test/cpp_headers/crc32.o 00:05:13.025 LINK dif 00:05:13.025 CC test/nvme/sgl/sgl.o 00:05:13.025 CXX test/cpp_headers/crc64.o 00:05:13.025 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:13.025 CXX test/cpp_headers/dif.o 00:05:13.025 CC test/lvol/esnap/esnap.o 00:05:13.284 CC app/vhost/vhost.o 00:05:13.284 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:13.284 CXX test/cpp_headers/dma.o 00:05:13.284 CC examples/nvme/hello_world/hello_world.o 00:05:13.284 LINK memory_ut 00:05:13.284 LINK sgl 00:05:13.284 CC examples/bdev/hello_world/hello_bdev.o 00:05:13.284 LINK vhost 00:05:13.284 CC examples/nvme/reconnect/reconnect.o 00:05:13.542 CXX test/cpp_headers/endian.o 00:05:13.542 CC examples/bdev/bdevperf/bdevperf.o 00:05:13.542 LINK hello_world 00:05:13.542 CXX test/cpp_headers/env_dpdk.o 00:05:13.542 LINK hello_bdev 00:05:13.542 CC test/nvme/e2edp/nvme_dp.o 00:05:13.542 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:13.799 CC test/nvme/overhead/overhead.o 00:05:13.799 LINK vhost_fuzz 00:05:13.799 CXX test/cpp_headers/env.o 00:05:13.799 LINK reconnect 00:05:13.799 CC examples/nvme/arbitration/arbitration.o 00:05:14.057 CC test/app/jsoncat/jsoncat.o 00:05:14.057 CXX test/cpp_headers/event.o 00:05:14.057 LINK nvme_dp 00:05:14.057 LINK overhead 00:05:14.057 CC 
examples/nvme/hotplug/hotplug.o 00:05:14.057 LINK jsoncat 00:05:14.315 CXX test/cpp_headers/fd_group.o 00:05:14.315 CC test/bdev/bdevio/bdevio.o 00:05:14.315 LINK arbitration 00:05:14.315 CC test/app/stub/stub.o 00:05:14.315 LINK nvme_manage 00:05:14.315 CXX test/cpp_headers/fd.o 00:05:14.315 LINK hotplug 00:05:14.574 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:14.574 LINK bdevperf 00:05:14.574 CC test/nvme/err_injection/err_injection.o 00:05:14.574 LINK stub 00:05:14.574 CXX test/cpp_headers/file.o 00:05:14.574 CC test/nvme/startup/startup.o 00:05:14.574 CC examples/nvme/abort/abort.o 00:05:14.574 LINK bdevio 00:05:14.841 LINK cmb_copy 00:05:14.841 LINK err_injection 00:05:14.841 CXX test/cpp_headers/ftl.o 00:05:14.841 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:14.841 CXX test/cpp_headers/gpt_spec.o 00:05:14.841 LINK startup 00:05:14.841 CC test/nvme/reserve/reserve.o 00:05:14.841 CXX test/cpp_headers/hexlify.o 00:05:14.841 LINK pmr_persistence 00:05:15.113 CC test/nvme/simple_copy/simple_copy.o 00:05:15.113 CC test/nvme/connect_stress/connect_stress.o 00:05:15.113 CC test/nvme/compliance/nvme_compliance.o 00:05:15.113 CC test/nvme/boot_partition/boot_partition.o 00:05:15.113 CC test/nvme/fused_ordering/fused_ordering.o 00:05:15.113 LINK abort 00:05:15.113 LINK reserve 00:05:15.113 CXX test/cpp_headers/histogram_data.o 00:05:15.371 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:15.371 LINK connect_stress 00:05:15.371 LINK boot_partition 00:05:15.371 CXX test/cpp_headers/idxd.o 00:05:15.371 LINK simple_copy 00:05:15.371 LINK fused_ordering 00:05:15.371 CC test/nvme/fdp/fdp.o 00:05:15.371 LINK doorbell_aers 00:05:15.371 CXX test/cpp_headers/idxd_spec.o 00:05:15.630 CXX test/cpp_headers/init.o 00:05:15.630 LINK nvme_compliance 00:05:15.630 CXX test/cpp_headers/ioat.o 00:05:15.630 CXX test/cpp_headers/ioat_spec.o 00:05:15.630 CC examples/nvmf/nvmf/nvmf.o 00:05:15.630 CC test/nvme/cuse/cuse.o 00:05:15.630 CXX test/cpp_headers/iscsi_spec.o 00:05:15.630 CXX test/cpp_headers/json.o 00:05:15.630 CXX test/cpp_headers/jsonrpc.o 00:05:15.630 CXX test/cpp_headers/keyring.o 00:05:15.630 CXX test/cpp_headers/keyring_module.o 00:05:15.630 CXX test/cpp_headers/likely.o 00:05:15.889 CXX test/cpp_headers/log.o 00:05:15.889 CXX test/cpp_headers/lvol.o 00:05:15.889 LINK fdp 00:05:15.889 CXX test/cpp_headers/memory.o 00:05:15.889 CXX test/cpp_headers/mmio.o 00:05:15.889 CXX test/cpp_headers/nbd.o 00:05:15.889 LINK nvmf 00:05:15.889 CXX test/cpp_headers/net.o 00:05:15.889 CXX test/cpp_headers/notify.o 00:05:16.147 CXX test/cpp_headers/nvme.o 00:05:16.147 CXX test/cpp_headers/nvme_intel.o 00:05:16.147 CXX test/cpp_headers/nvme_ocssd.o 00:05:16.147 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:16.147 CXX test/cpp_headers/nvme_spec.o 00:05:16.147 CXX test/cpp_headers/nvme_zns.o 00:05:16.147 CXX test/cpp_headers/nvmf_cmd.o 00:05:16.147 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:16.147 CXX test/cpp_headers/nvmf.o 00:05:16.147 CXX test/cpp_headers/nvmf_spec.o 00:05:16.147 CXX test/cpp_headers/nvmf_transport.o 00:05:16.147 CXX test/cpp_headers/opal.o 00:05:16.418 CXX test/cpp_headers/opal_spec.o 00:05:16.418 CXX test/cpp_headers/pci_ids.o 00:05:16.418 CXX test/cpp_headers/pipe.o 00:05:16.418 CXX test/cpp_headers/queue.o 00:05:16.418 CXX test/cpp_headers/reduce.o 00:05:16.418 CXX test/cpp_headers/rpc.o 00:05:16.418 CXX test/cpp_headers/scheduler.o 00:05:16.418 CXX test/cpp_headers/scsi.o 00:05:16.418 CXX test/cpp_headers/scsi_spec.o 00:05:16.418 CXX test/cpp_headers/sock.o 00:05:16.418 CXX 
test/cpp_headers/stdinc.o 00:05:16.418 CXX test/cpp_headers/string.o 00:05:16.682 CXX test/cpp_headers/thread.o 00:05:16.682 CXX test/cpp_headers/trace.o 00:05:16.682 CXX test/cpp_headers/trace_parser.o 00:05:16.682 CXX test/cpp_headers/tree.o 00:05:16.682 CXX test/cpp_headers/ublk.o 00:05:16.682 CXX test/cpp_headers/util.o 00:05:16.682 CXX test/cpp_headers/uuid.o 00:05:16.682 CXX test/cpp_headers/version.o 00:05:16.682 CXX test/cpp_headers/vfio_user_pci.o 00:05:16.682 CXX test/cpp_headers/vfio_user_spec.o 00:05:16.682 CXX test/cpp_headers/vhost.o 00:05:16.682 CXX test/cpp_headers/vmd.o 00:05:16.941 CXX test/cpp_headers/xor.o 00:05:16.941 CXX test/cpp_headers/zipf.o 00:05:17.199 LINK cuse 00:05:20.504 LINK esnap 00:05:21.071 00:05:21.071 real 1m20.007s 00:05:21.071 user 7m39.811s 00:05:21.071 sys 1m47.914s 00:05:21.071 ************************************ 00:05:21.071 END TEST make 00:05:21.071 ************************************ 00:05:21.071 08:47:27 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:05:21.071 08:47:27 make -- common/autotest_common.sh@10 -- $ set +x 00:05:21.071 08:47:27 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:21.071 08:47:27 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:21.071 08:47:27 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:21.071 08:47:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:21.071 08:47:27 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:21.071 08:47:27 -- pm/common@44 -- $ pid=5146 00:05:21.071 08:47:27 -- pm/common@50 -- $ kill -TERM 5146 00:05:21.071 08:47:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:21.071 08:47:27 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:21.071 08:47:27 -- pm/common@44 -- $ pid=5148 00:05:21.071 08:47:27 -- pm/common@50 -- $ kill -TERM 5148 00:05:21.071 08:47:28 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:21.071 08:47:28 -- nvmf/common.sh@7 -- # uname -s 00:05:21.071 08:47:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:21.071 08:47:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:21.071 08:47:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:21.071 08:47:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:21.071 08:47:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:21.071 08:47:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:21.071 08:47:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:21.071 08:47:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:21.071 08:47:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:21.071 08:47:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:21.071 08:47:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:05:21.071 08:47:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:05:21.071 08:47:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:21.071 08:47:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:21.071 08:47:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:05:21.071 08:47:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:21.071 08:47:28 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:21.071 08:47:28 -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:05:21.071 08:47:28 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:21.071 08:47:28 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:21.071 08:47:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.072 08:47:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.072 08:47:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.072 08:47:28 -- paths/export.sh@5 -- # export PATH 00:05:21.072 08:47:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.072 08:47:28 -- nvmf/common.sh@47 -- # : 0 00:05:21.072 08:47:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:21.072 08:47:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:21.072 08:47:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:21.072 08:47:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:21.072 08:47:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:21.072 08:47:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:21.072 08:47:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:21.072 08:47:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:21.072 08:47:28 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:21.072 08:47:28 -- spdk/autotest.sh@32 -- # uname -s 00:05:21.072 08:47:28 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:21.072 08:47:28 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:21.072 08:47:28 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:21.072 08:47:28 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:21.072 08:47:28 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:21.072 08:47:28 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:21.072 08:47:28 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:21.072 08:47:28 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:21.072 08:47:28 -- spdk/autotest.sh@48 -- # udevadm_pid=53466 00:05:21.072 08:47:28 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:21.072 08:47:28 -- pm/common@17 -- # local monitor 00:05:21.072 08:47:28 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:21.072 08:47:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:21.072 08:47:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:21.072 08:47:28 -- pm/common@25 -- # sleep 1 00:05:21.072 08:47:28 -- pm/common@21 
-- # date +%s 00:05:21.072 08:47:28 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721897248 00:05:21.072 08:47:28 -- pm/common@21 -- # date +%s 00:05:21.072 08:47:28 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721897248 00:05:21.072 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721897248_collect-vmstat.pm.log 00:05:21.072 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721897248_collect-cpu-load.pm.log 00:05:22.006 08:47:29 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:22.006 08:47:29 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:22.006 08:47:29 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:22.006 08:47:29 -- common/autotest_common.sh@10 -- # set +x 00:05:22.006 08:47:29 -- spdk/autotest.sh@59 -- # create_test_list 00:05:22.006 08:47:29 -- common/autotest_common.sh@748 -- # xtrace_disable 00:05:22.006 08:47:29 -- common/autotest_common.sh@10 -- # set +x 00:05:22.265 08:47:29 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:22.265 08:47:29 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:22.265 08:47:29 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:22.265 08:47:29 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:22.265 08:47:29 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:22.265 08:47:29 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:22.265 08:47:29 -- common/autotest_common.sh@1455 -- # uname 00:05:22.266 08:47:29 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:05:22.266 08:47:29 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:22.266 08:47:29 -- common/autotest_common.sh@1475 -- # uname 00:05:22.266 08:47:29 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:05:22.266 08:47:29 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:05:22.266 08:47:29 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:05:22.266 08:47:29 -- spdk/autotest.sh@72 -- # hash lcov 00:05:22.266 08:47:29 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:05:22.266 08:47:29 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:05:22.266 --rc lcov_branch_coverage=1 00:05:22.266 --rc lcov_function_coverage=1 00:05:22.266 --rc genhtml_branch_coverage=1 00:05:22.266 --rc genhtml_function_coverage=1 00:05:22.266 --rc genhtml_legend=1 00:05:22.266 --rc geninfo_all_blocks=1 00:05:22.266 ' 00:05:22.266 08:47:29 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:05:22.266 --rc lcov_branch_coverage=1 00:05:22.266 --rc lcov_function_coverage=1 00:05:22.266 --rc genhtml_branch_coverage=1 00:05:22.266 --rc genhtml_function_coverage=1 00:05:22.266 --rc genhtml_legend=1 00:05:22.266 --rc geninfo_all_blocks=1 00:05:22.266 ' 00:05:22.266 08:47:29 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:05:22.266 --rc lcov_branch_coverage=1 00:05:22.266 --rc lcov_function_coverage=1 00:05:22.266 --rc genhtml_branch_coverage=1 00:05:22.266 --rc genhtml_function_coverage=1 00:05:22.266 --rc genhtml_legend=1 00:05:22.266 --rc geninfo_all_blocks=1 00:05:22.266 --no-external' 00:05:22.266 08:47:29 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:05:22.266 --rc lcov_branch_coverage=1 00:05:22.266 --rc 
lcov_function_coverage=1 00:05:22.266 --rc genhtml_branch_coverage=1 00:05:22.266 --rc genhtml_function_coverage=1 00:05:22.266 --rc genhtml_legend=1 00:05:22.266 --rc geninfo_all_blocks=1 00:05:22.266 --no-external' 00:05:22.266 08:47:29 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:05:22.266 lcov: LCOV version 1.14 00:05:22.266 08:47:29 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:40.369 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:40.369 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:52.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:52.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:52.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:52.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:52.573 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:52.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:52.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:52.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:52.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:52.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:05:52.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:52.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:05:52.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:52.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:05:52.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:52.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:05:52.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:52.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:05:52.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:52.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:52.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:52.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:05:52.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:52.574 geninfo: 
WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:52.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:52.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:05:52.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:52.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:05:52.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:52.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:52.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:05:52.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:05:52.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:52.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:05:52.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:52.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:05:52.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:52.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:05:52.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:52.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:05:52.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:52.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:05:52.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:52.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:05:52.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:52.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:05:52.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:52.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:05:52.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:05:52.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:05:52.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:05:52.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:05:52.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:52.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:05:52.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:52.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:05:52.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no 
functions found 00:05:52.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:05:52.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:52.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:05:52.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:52.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:05:52.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:52.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:05:52.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:52.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:05:52.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:52.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:05:52.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:52.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:05:52.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:05:52.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:05:52.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:52.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:52.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:52.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:05:52.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:52.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:52.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:05:52.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:05:52.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:52.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:05:52.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:05:52.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:05:52.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:05:52.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:05:52.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:52.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:05:52.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:05:52.574 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:05:52.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:52.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:05:52.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:52.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:05:52.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:52.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:05:52.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:52.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:05:52.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:05:52.574 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:05:52.574 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:52.575 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:05:52.575 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:52.575 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:05:52.575 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:52.575 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:05:52.575 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:52.575 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:52.575 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:52.575 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:52.575 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:52.575 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:05:52.575 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:52.575 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:05:52.575 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:52.575 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:52.575 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:52.575 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:52.575 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:52.575 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:05:52.575 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:52.575 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:52.575 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:52.575 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:52.575 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:52.575 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:05:52.575 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:52.575 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:05:52.575 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:52.575 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:05:52.575 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:52.575 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:05:52.575 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:52.575 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:05:52.575 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:52.575 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:05:52.575 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:52.575 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:05:52.575 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:52.575 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:05:52.575 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:52.575 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:05:52.575 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:52.575 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:05:52.575 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:52.575 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:05:52.575 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:52.575 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:05:52.575 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:05:52.575 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:05:52.575 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:52.575 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:05:52.575 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:52.575 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:05:52.575 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:52.575 geninfo: WARNING: GCOV did not produce 
any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:05:52.575 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:52.575 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:05:52.575 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:52.575 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:05:52.575 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:05:52.575 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:05:52.575 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:52.575 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:05:52.575 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:52.575 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:52.575 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:52.575 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:52.575 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:52.575 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:52.575 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:52.575 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:05:52.575 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:52.575 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:05:52.575 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:52.575 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:52.575 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:52.575 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:55.108 08:48:02 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:05:55.108 08:48:02 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:55.108 08:48:02 -- common/autotest_common.sh@10 -- # set +x 00:05:55.108 08:48:02 -- spdk/autotest.sh@91 -- # rm -f 00:05:55.108 08:48:02 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:55.677 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:55.677 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:55.677 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:55.937 08:48:02 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:05:55.937 08:48:02 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:55.937 08:48:02 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:55.937 08:48:02 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:55.937 08:48:02 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:55.937 08:48:02 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:55.937 08:48:02 -- 
common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:55.937 08:48:02 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:55.937 08:48:02 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:55.937 08:48:02 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:55.937 08:48:02 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:55.937 08:48:02 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:55.937 08:48:02 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:55.937 08:48:02 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:55.937 08:48:02 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:55.937 08:48:02 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:05:55.937 08:48:02 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:05:55.937 08:48:02 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:55.937 08:48:02 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:55.937 08:48:02 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:55.937 08:48:02 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:05:55.937 08:48:02 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:05:55.937 08:48:02 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:55.937 08:48:02 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:55.937 08:48:02 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:05:55.937 08:48:02 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:55.937 08:48:02 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:55.937 08:48:02 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:05:55.937 08:48:02 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:05:55.937 08:48:02 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:55.937 No valid GPT data, bailing 00:05:55.937 08:48:02 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:55.937 08:48:02 -- scripts/common.sh@391 -- # pt= 00:05:55.937 08:48:02 -- scripts/common.sh@392 -- # return 1 00:05:55.937 08:48:02 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:55.937 1+0 records in 00:05:55.937 1+0 records out 00:05:55.937 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00578896 s, 181 MB/s 00:05:55.937 08:48:02 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:55.937 08:48:02 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:55.937 08:48:02 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:05:55.937 08:48:02 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:05:55.937 08:48:02 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:55.937 No valid GPT data, bailing 00:05:55.937 08:48:02 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:55.937 08:48:02 -- scripts/common.sh@391 -- # pt= 00:05:55.937 08:48:02 -- scripts/common.sh@392 -- # return 1 00:05:55.937 08:48:02 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:55.937 1+0 records in 00:05:55.937 1+0 records out 00:05:55.937 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00479912 s, 218 MB/s 00:05:55.937 08:48:02 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:55.937 08:48:02 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:55.937 
08:48:02 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:05:55.937 08:48:02 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:05:55.937 08:48:02 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:55.937 No valid GPT data, bailing 00:05:55.937 08:48:03 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:55.937 08:48:03 -- scripts/common.sh@391 -- # pt= 00:05:55.937 08:48:03 -- scripts/common.sh@392 -- # return 1 00:05:55.937 08:48:03 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:55.937 1+0 records in 00:05:55.937 1+0 records out 00:05:55.937 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00513472 s, 204 MB/s 00:05:55.937 08:48:03 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:55.937 08:48:03 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:55.937 08:48:03 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:05:55.937 08:48:03 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:05:55.937 08:48:03 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:56.196 No valid GPT data, bailing 00:05:56.196 08:48:03 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:56.196 08:48:03 -- scripts/common.sh@391 -- # pt= 00:05:56.196 08:48:03 -- scripts/common.sh@392 -- # return 1 00:05:56.196 08:48:03 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:56.196 1+0 records in 00:05:56.196 1+0 records out 00:05:56.196 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00499711 s, 210 MB/s 00:05:56.196 08:48:03 -- spdk/autotest.sh@118 -- # sync 00:05:56.196 08:48:03 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:56.196 08:48:03 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:56.196 08:48:03 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:58.098 08:48:04 -- spdk/autotest.sh@124 -- # uname -s 00:05:58.098 08:48:05 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:05:58.098 08:48:05 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:58.098 08:48:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:58.098 08:48:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.098 08:48:05 -- common/autotest_common.sh@10 -- # set +x 00:05:58.098 ************************************ 00:05:58.098 START TEST setup.sh 00:05:58.098 ************************************ 00:05:58.098 08:48:05 setup.sh -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:58.098 * Looking for test storage... 
00:05:58.098 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:58.098 08:48:05 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:05:58.098 08:48:05 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:58.098 08:48:05 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:58.098 08:48:05 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:58.098 08:48:05 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.098 08:48:05 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:58.098 ************************************ 00:05:58.098 START TEST acl 00:05:58.098 ************************************ 00:05:58.098 08:48:05 setup.sh.acl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:58.098 * Looking for test storage... 00:05:58.098 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:58.098 08:48:05 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:05:58.098 08:48:05 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:58.098 08:48:05 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:58.098 08:48:05 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:58.098 08:48:05 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:58.098 08:48:05 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:58.098 08:48:05 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:58.098 08:48:05 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:58.098 08:48:05 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:58.098 08:48:05 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:58.098 08:48:05 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:58.098 08:48:05 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:58.098 08:48:05 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:58.098 08:48:05 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:58.098 08:48:05 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:58.098 08:48:05 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:05:58.098 08:48:05 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:05:58.098 08:48:05 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:58.098 08:48:05 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:58.098 08:48:05 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:58.098 08:48:05 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:05:58.098 08:48:05 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:05:58.098 08:48:05 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:58.098 08:48:05 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:58.098 08:48:05 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:05:58.098 08:48:05 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:05:58.098 08:48:05 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:05:58.098 
08:48:05 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:05:58.098 08:48:05 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:05:58.098 08:48:05 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:58.098 08:48:05 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:59.036 08:48:05 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:05:59.036 08:48:05 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:05:59.036 08:48:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:59.036 08:48:05 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:05:59.036 08:48:05 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:05:59.036 08:48:05 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:59.613 08:48:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:05:59.613 08:48:06 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:59.613 08:48:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:59.613 Hugepages 00:05:59.613 node hugesize free / total 00:05:59.613 08:48:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:59.613 08:48:06 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:59.613 08:48:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:59.613 00:05:59.613 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:59.613 08:48:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:59.613 08:48:06 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:59.613 08:48:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:59.613 08:48:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:59.613 08:48:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:59.613 08:48:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:59.613 08:48:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:59.884 08:48:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:05:59.884 08:48:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:59.884 08:48:06 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:59.884 08:48:06 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:59.885 08:48:06 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:59.885 08:48:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:59.885 08:48:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:05:59.885 08:48:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:59.885 08:48:06 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:59.885 08:48:06 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:59.885 08:48:06 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:59.885 08:48:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:59.885 08:48:06 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:05:59.885 08:48:06 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:05:59.885 08:48:06 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.885 08:48:06 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.885 08:48:06 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:59.885 ************************************ 00:05:59.885 START TEST denied 
00:05:59.885 ************************************ 00:05:59.885 08:48:06 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:05:59.885 08:48:06 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:05:59.885 08:48:06 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:05:59.885 08:48:06 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:05:59.885 08:48:06 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:05:59.885 08:48:06 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:00.822 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:06:00.822 08:48:07 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:06:00.822 08:48:07 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:06:00.822 08:48:07 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:06:00.822 08:48:07 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:06:00.822 08:48:07 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:06:00.822 08:48:07 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:06:00.822 08:48:07 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:06:00.822 08:48:07 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:06:00.822 08:48:07 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:00.822 08:48:07 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:01.388 00:06:01.388 real 0m1.423s 00:06:01.388 user 0m0.572s 00:06:01.388 sys 0m0.798s 00:06:01.388 08:48:08 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.388 08:48:08 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:06:01.388 ************************************ 00:06:01.388 END TEST denied 00:06:01.388 ************************************ 00:06:01.388 08:48:08 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:06:01.388 08:48:08 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.388 08:48:08 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.388 08:48:08 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:06:01.388 ************************************ 00:06:01.388 START TEST allowed 00:06:01.388 ************************************ 00:06:01.388 08:48:08 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:06:01.388 08:48:08 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:06:01.388 08:48:08 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:06:01.388 08:48:08 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:06:01.388 08:48:08 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:06:01.388 08:48:08 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:02.323 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:02.323 08:48:09 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:06:02.323 08:48:09 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:06:02.323 08:48:09 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:06:02.323 08:48:09 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e 
/sys/bus/pci/devices/0000:00:11.0 ]] 00:06:02.323 08:48:09 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:06:02.323 08:48:09 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:06:02.323 08:48:09 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:06:02.323 08:48:09 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:06:02.323 08:48:09 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:02.323 08:48:09 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:02.889 00:06:02.889 real 0m1.557s 00:06:02.889 user 0m0.670s 00:06:02.889 sys 0m0.861s 00:06:02.889 08:48:09 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.889 08:48:09 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:06:02.889 ************************************ 00:06:02.889 END TEST allowed 00:06:02.889 ************************************ 00:06:02.889 00:06:02.889 real 0m4.763s 00:06:02.889 user 0m2.085s 00:06:02.889 sys 0m2.604s 00:06:02.889 08:48:09 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.889 08:48:09 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:06:02.889 ************************************ 00:06:02.889 END TEST acl 00:06:02.889 ************************************ 00:06:02.889 08:48:09 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:06:02.889 08:48:09 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.889 08:48:09 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.889 08:48:09 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:02.889 ************************************ 00:06:02.889 START TEST hugepages 00:06:02.889 ************************************ 00:06:02.889 08:48:09 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:06:02.889 * Looking for test storage... 
00:06:02.889 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:03.149 08:48:10 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:06:03.149 08:48:10 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:06:03.149 08:48:10 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:06:03.149 08:48:10 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:06:03.149 08:48:10 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:06:03.149 08:48:10 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 5807968 kB' 'MemAvailable: 7417096 kB' 'Buffers: 2436 kB' 'Cached: 1822784 kB' 'SwapCached: 0 kB' 'Active: 435908 kB' 'Inactive: 1494664 kB' 'Active(anon): 115840 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1494664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 106720 kB' 'Mapped: 48732 kB' 'Shmem: 10488 kB' 'KReclaimable: 62672 kB' 'Slab: 134600 kB' 'SReclaimable: 62672 kB' 'SUnreclaim: 71928 kB' 'KernelStack: 6488 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 337892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': 
' 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.150 08:48:10 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.150 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # 
read -r var val _ 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
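
The long run of "continue" records above all come from one helper: setup/common.sh reads /proc/meminfo into an array, strips any leading "Node N " prefix so the same code also works on per-node meminfo files, then splits each line on ': ' and skips it unless the key is the one requested. The key requested here is Hugepagesize, which matches in the record that follows and yields 2048 kB. A condensed, stand-alone sketch of that lookup (equivalent logic rather than the exact SPDK source; the loop shape is simplified):

shopt -s extglob    # needed for the +([0-9]) pattern below

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node queries read the node-local file instead, when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    local -a mem
    mapfile -t mem < "$mem_f"
    # Node-local meminfo prefixes every line with "Node <id> "; strip it.
    mem=("${mem[@]#Node +([0-9]) }")

    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"    # "Hugepagesize: 2048 kB" -> var, val
        [[ $var == "$get" ]] || continue          # every miss is one trace record above
        echo "$val"
        return 0
    done
    return 1
}

default_hugepages=$(get_meminfo Hugepagesize)     # -> 2048 (kB) on this VM
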
00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:06:03.151 08:48:10 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:06:03.152 08:48:10 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:06:03.152 08:48:10 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:06:03.152 08:48:10 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:06:03.152 08:48:10 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:06:03.152 08:48:10 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:06:03.152 08:48:10 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:03.152 08:48:10 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:06:03.152 08:48:10 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:03.152 08:48:10 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:03.152 08:48:10 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:06:03.152 08:48:10 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:06:03.152 08:48:10 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:06:03.152 08:48:10 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:03.152 08:48:10 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:03.152 08:48:10 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:03.152 08:48:10 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:03.152 08:48:10 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:06:03.152 08:48:10 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:06:03.152 08:48:10 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:06:03.152 08:48:10 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:03.152 08:48:10 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.152 08:48:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:03.152 ************************************ 00:06:03.152 START TEST default_setup 00:06:03.152 ************************************ 00:06:03.152 08:48:10 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:06:03.152 08:48:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:06:03.152 08:48:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:06:03.152 08:48:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:06:03.152 08:48:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:06:03.152 08:48:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # 
node_ids=('0') 00:06:03.152 08:48:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:06:03.152 08:48:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:03.152 08:48:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:06:03.152 08:48:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:06:03.152 08:48:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:06:03.152 08:48:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:06:03.152 08:48:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:03.152 08:48:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:03.152 08:48:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:03.152 08:48:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:03.152 08:48:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:06:03.152 08:48:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:06:03.152 08:48:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:06:03.152 08:48:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:06:03.152 08:48:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:06:03.152 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:06:03.152 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:03.718 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:03.718 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:03.979 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:03.979 08:48:10 
setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7901400 kB' 'MemAvailable: 9510404 kB' 'Buffers: 2436 kB' 'Cached: 1822776 kB' 'SwapCached: 0 kB' 'Active: 452920 kB' 'Inactive: 1494672 kB' 'Active(anon): 132852 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1494672 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 123936 kB' 'Mapped: 48976 kB' 'Shmem: 10464 kB' 'KReclaimable: 62408 kB' 'Slab: 134100 kB' 'SReclaimable: 62408 kB' 'SUnreclaim: 71692 kB' 'KernelStack: 6464 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
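
A few records back, run_test default_setup asked get_test_nr_hugepages for 2097152 kB of hugepage memory on node 0. With the 2048 kB page size read above, that is 1024 pages, which is exactly what the later meminfo snapshots report (HugePages_Total: 1024, Hugetlb: 2097152 kB). A condensed sketch of that sizing plus the clear_hp cleanup traced just before it; variable names follow the trace, but the division line and the redirect target of the bare "echo 0" are filled in here, and the sysfs writes need root:

default_hugepages=2048                          # kB, from get_meminfo Hugepagesize
size=2097152                                    # kB requested by default_setup (2 GiB)
node_ids=('0')                                  # the test pins everything to node 0

nr_hugepages=$(( size / default_hugepages ))    # 2097152 / 2048 = 1024 pages
nodes_test=()
nodes_test[0]=$nr_hugepages

# clear_hp: zero every hugepage pool on the node before the test, so that
# scripts/setup.sh (run next with CLEAR_HUGE=yes) re-reserves exactly the
# requested amount. The nr_hugepages target is implied; the trace only
# shows the bare "echo 0".
for hp in /sys/devices/system/node/node0/hugepages/hugepages-*; do
    echo 0 > "$hp/nr_hugepages"
done
export CLEAR_HUGE=yes
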
00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.979 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7901624 kB' 'MemAvailable: 9510632 kB' 'Buffers: 2436 kB' 'Cached: 1822776 kB' 'SwapCached: 0 kB' 'Active: 452580 kB' 'Inactive: 1494676 kB' 'Active(anon): 132512 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1494676 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 123652 kB' 'Mapped: 48916 kB' 'Shmem: 10464 kB' 'KReclaimable: 62408 kB' 'Slab: 134092 kB' 'SReclaimable: 62408 kB' 'SUnreclaim: 71684 kB' 'KernelStack: 6416 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
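
The records on either side of this point belong to verify_nr_hugepages, which re-reads the counters after scripts/setup.sh has reserved the 1024 pages. The transparent-hugepage state gates the AnonHugePages read (it came back 0 just above), and the scan now in progress is fetching HugePages_Surp. A hedged sketch of those steps, reusing the get_meminfo helper sketched earlier; the source of resv and the closing comparison are assumptions about how the values are used, not lines lifted from setup/hugepages.sh:

verify_nr_hugepages_sketch() {             # "_sketch" suffix: not the real function
    local expected=1024 anon=0 surp resv

    # AnonHugePages is only meaningful when THP is not globally "never";
    # this VM reports "always [madvise] never", so the branch is taken.
    if [[ $(< /sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)  # 0 kB in this run
    fi

    surp=$(get_meminfo HugePages_Surp)     # surplus pages, 0 here
    resv=$(get_meminfo HugePages_Rsvd)     # assumed source for the "resv" local

    # Illustrative check: the pool default_setup asked for should be fully present.
    (( $(get_meminfo HugePages_Total) == expected ))
}
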
00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.980 
08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.980 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.981 08:48:10 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- 
setup/common.sh@33 -- # echo 0 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7901624 kB' 'MemAvailable: 9510632 kB' 'Buffers: 2436 kB' 'Cached: 1822776 kB' 'SwapCached: 0 kB' 'Active: 452512 kB' 'Inactive: 1494676 kB' 'Active(anon): 132444 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1494676 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 123592 kB' 'Mapped: 48916 kB' 'Shmem: 10464 kB' 'KReclaimable: 62408 kB' 'Slab: 134088 kB' 'SReclaimable: 62408 kB' 'SUnreclaim: 71680 kB' 'KernelStack: 6416 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.981 08:48:10 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.981 08:48:10 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.981 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
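
The xtrace entries around this point are setup/common.sh's get_meminfo helper scanning each /proc/meminfo key in turn until it reaches the one requested (HugePages_Surp above, HugePages_Rsvd here, HugePages_Total further down). As a reading aid, the following is a minimal bash sketch of that parsing loop, reconstructed only from what the trace itself shows (the @23/@24, @29 and @31 markers); the function name get_meminfo_sketch is hypothetical and this is not the actual setup/common.sh source.

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the "+([0-9])" prefix strip seen at common.sh@29

    get_meminfo_sketch() {   # hypothetical name; mirrors only the traced logic
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo line var val _
        # Per-node stats come from /sys when a node id is given (common.sh@23-24).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        # Per-node lines carry a "Node <id> " prefix; strip it (common.sh@29).
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            # Split "Key:   value kB" into key and value (common.sh@31).
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"   # e.g. 0 for HugePages_Surp in this run
                return 0
            fi
        done
        echo 0
    }

Called as get_meminfo_sketch HugePages_Surp or, for a NUMA node, get_meminfo_sketch HugePages_Surp 0 — the two call shapes that appear in this trace.
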
00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.982 08:48:10 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.982 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:06:03.983 nr_hugepages=1024 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:03.983 resv_hugepages=0 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:03.983 surplus_hugepages=0 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:03.983 anon_hugepages=0 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:03.983 08:48:10 
setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7901876 kB' 'MemAvailable: 9510884 kB' 'Buffers: 2436 kB' 'Cached: 1822776 kB' 'SwapCached: 0 kB' 'Active: 452524 kB' 'Inactive: 1494676 kB' 'Active(anon): 132456 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1494676 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 123608 kB' 'Mapped: 48976 kB' 'Shmem: 10464 kB' 'KReclaimable: 62408 kB' 'Slab: 134088 kB' 'SReclaimable: 62408 kB' 'SUnreclaim: 71680 kB' 'KernelStack: 6400 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.983 08:48:10 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.983 08:48:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.983 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.983 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.983 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.983 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:06:03.983 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.983 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.983 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.983 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.983 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.983 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.983 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.984 08:48:11 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
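
By this point the trace has already pulled surp=0 (hugepages.sh@99) and resv=0 (hugepages.sh@100) and is scanning for HugePages_Total, echoed as 1024 just below; hugepages.sh@107/@110 then assert that the pool adds up, and hugepages.sh@112-@117 repeat the lookup per NUMA node (node0 here). A rough, hedged sketch of that consistency check, reusing the get_meminfo_sketch helper sketched earlier (all names are assumptions, and the concrete numbers are simply the values this run prints, not the SPDK scripts' actual code):

    nr_hugepages=1024                               # as echoed at hugepages.sh@102
    surp=$(get_meminfo_sketch HugePages_Surp)       # 0 in this run
    resv=$(get_meminfo_sketch HugePages_Rsvd)       # 0 in this run
    total=$(get_meminfo_sketch HugePages_Total)     # 1024 in this run
    # Global view: the allocated pool must cover the request plus surplus/reserved.
    (( total == nr_hugepages + surp + resv )) || echo "hugepage pool mismatch" >&2
    # Per-node pass (hugepages.sh@115-@117) repeats the lookup with a node id:
    get_meminfo_sketch HugePages_Surp 0
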
00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:03.984 08:48:11 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7901876 kB' 'MemUsed: 4340100 kB' 'SwapCached: 0 kB' 'Active: 452636 kB' 'Inactive: 1494680 kB' 'Active(anon): 132568 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1494680 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'FilePages: 1825212 kB' 'Mapped: 48916 kB' 'AnonPages: 123756 kB' 'Shmem: 10464 kB' 'KernelStack: 6384 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62408 kB' 'Slab: 134076 kB' 'SReclaimable: 62408 kB' 'SUnreclaim: 71668 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:06:03.984 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:03.985 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.985 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:06:03.985 08:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:06:03.985 08:48:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:03.985 08:48:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:03.985 08:48:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:03.985 08:48:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:03.985 node0=1024 expecting 1024 08:48:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:06:03.985 08:48:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:06:03.985 00:06:03.985 real 0m0.977s 00:06:03.985 user 0m0.483s 00:06:03.985 sys 0m0.443s 00:06:03.985 08:48:11 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.985 08:48:11 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:06:03.985 ************************************ 00:06:03.985 END TEST default_setup 00:06:03.985 ************************************
00:06:03.985 08:48:11 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:06:03.986 08:48:11 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:03.986 08:48:11 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
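The xtrace above is setup/common.sh's get_meminfo walking the node-0 meminfo fields one by one until it reaches the requested key (HugePages_Surp here) and echoing its value. A minimal stand-alone sketch of that parsing pattern, paraphrased rather than copied from the SPDK helper (the function name get_meminfo_value and the sed-based prefix strip are this sketch's own), looks like:

    # Sketch: fetch one field from /proc/meminfo, or from a per-node meminfo
    # file when a node id is given, the same way the traced loop does it.
    get_meminfo_value() {
      local get=$1 node=${2:-} var val _
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      # Per-node files prefix every line with "Node <id> "; drop that prefix,
      # then split each "Field:   value kB" line on ': ' like the trace does.
      while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
      return 1
    }

For example, get_meminfo_value HugePages_Free 0 would print 1024 at the point captured in the snapshot above.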
00:06:03.986 08:48:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:04.244 ************************************ 00:06:04.244 START TEST per_node_1G_alloc 00:06:04.244 ************************************ 00:06:04.244 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:06:04.244 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:06:04.244 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:06:04.244 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:06:04.244 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:06:04.244 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:06:04.244 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:06:04.244 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:06:04.244 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:04.244 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:06:04.244 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:06:04.244 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:06:04.244 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:04.244 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:06:04.244 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:04.244 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:04.244 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:04.244 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:06:04.244 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:06:04.244 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:06:04.244 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:06:04.244 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:06:04.244 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:06:04.244 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:06:04.244 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:04.244 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:04.507 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:04.507 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:04.507 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:04.507 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:06:04.507 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # 
verify_nr_hugepages 00:06:04.507 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:06:04.507 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:04.507 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:04.507 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:04.507 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:04.507 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:04.507 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:04.507 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:04.507 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:06:04.507 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8949908 kB' 'MemAvailable: 10558920 kB' 'Buffers: 2436 kB' 'Cached: 1822776 kB' 'SwapCached: 0 kB' 'Active: 453048 kB' 'Inactive: 1494680 kB' 'Active(anon): 132980 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1494680 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123900 kB' 'Mapped: 49048 kB' 'Shmem: 10464 kB' 'KReclaimable: 62408 kB' 'Slab: 134124 kB' 'SReclaimable: 62408 kB' 'SUnreclaim: 71716 kB' 'KernelStack: 6420 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB'
00:06:04.507 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
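At this point verify_nr_hugepages has confirmed AnonHugePages is 0 and moves on to the surplus and reserved counters; the pass/fail arithmetic itself lives in setup/hugepages.sh and is not visible in this trace. A rough equivalent check, assuming the standard per-node sysfs counters and the 512 pages requested for node 0 earlier in this test, could be:

    # Assumption: this mirrors the intent of the trace (free/surplus/anon checks
    # against the NRHUGE=512 HUGENODE=0 request), not the script's exact logic.
    node=0
    expected=512
    sysfs=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB
    free_pages=$(cat "$sysfs/free_hugepages")
    surp_pages=$(cat "$sysfs/surplus_hugepages")
    anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    echo "node$node free=$free_pages surp=$surp_pages anon=${anon_kb} kB"
    (( free_pages - surp_pages == expected )) || echo "unexpected hugepage count" >&2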
val 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8950548 kB' 'MemAvailable: 10559564 kB' 'Buffers: 2436 kB' 'Cached: 1822780 kB' 'SwapCached: 0 kB' 'Active: 452780 kB' 'Inactive: 1494684 kB' 'Active(anon): 132712 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1494684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123632 kB' 'Mapped: 48800 kB' 'Shmem: 10464 kB' 'KReclaimable: 62408 kB' 'Slab: 134112 kB' 'SReclaimable: 62408 kB' 'SUnreclaim: 71704 kB' 'KernelStack: 6432 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.509 08:48:11 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.509 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.510 08:48:11 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.510 08:48:11 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _
[xtrace elided: the field-by-field scan continues over the remaining /proc/meminfo keys (PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd); none match HugePages_Surp, so each iteration hits setup/common.sh@32 continue]
00:06:04.511 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:04.511 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:06:04.511 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:06:04.511 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:06:04.511 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:06:04.511 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
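What the trace above is doing: setup/common.sh's get_meminfo reads a meminfo-style file one "Key: value" line at a time and returns the value of the requested key (HugePages_Surp a moment ago, which is 0; HugePages_Rsvd next). Below is a minimal standalone sketch of that kind of lookup; the function name get_meminfo_value and its simplified parsing are illustrative assumptions, not the actual setup/common.sh code.

    #!/usr/bin/env bash
    shopt -s extglob

    # Illustrative sketch only: walk a meminfo-style file line by line and
    # print the value of one key, the way the traced loop above does.
    # get_meminfo_value is a hypothetical name, not the real helper.
    get_meminfo_value() {
        local key=$1 node=${2:-} mem_f=/proc/meminfo line var val rest
        # Per-node queries read that node's own meminfo file when it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            line=${line#Node +([0-9]) }              # strip the per-node "Node <n> " prefix
            IFS=': ' read -r var val rest <<<"$line" # split "Key:   value [kB]"
            if [[ $var == "$key" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done <"$mem_f"
        return 1
    }

    get_meminfo_value HugePages_Surp      # prints 0 on the system traced above
    get_meminfo_value HugePages_Total 0   # per-node query against node0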
00:06:04.511 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:06:04.511 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:06:04.511 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:04.511 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:04.511 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:04.511 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:04.511 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:04.511 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:04.511 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:04.511 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:04.511 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8950548 kB' 'MemAvailable: 10559564 kB' 'Buffers: 2436 kB' 'Cached: 1822780 kB' 'SwapCached: 0 kB' 'Active: 452632 kB' 'Inactive: 1494684 kB' 'Active(anon): 132564 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1494684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123768 kB' 'Mapped: 48800 kB' 'Shmem: 10464 kB' 'KReclaimable: 62408 kB' 'Slab: 134116 kB' 'SReclaimable: 62408 kB' 'SUnreclaim: 71708 kB' 'KernelStack: 6448 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB'
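A quick sanity check on the snapshot just printed: HugePages_Total is 512 and Hugepagesize is 2048 kB, and 512 * 2048 kB = 1048576 kB, which is exactly the Hugetlb figure in the same snapshot (1 GiB of huge pages), consistent with what the per_node_1G_alloc test name suggests. The one-liner below only reproduces that arithmetic; it is an illustration, not part of the test.

    echo $(( 512 * 2048 )) kB   # 1048576 kB, matching the Hugetlb line above (1 GiB)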
[xtrace elided: get_meminfo walks the snapshot above key by key (MemTotal through HugePages_Free), skipping every line with setup/common.sh@32 continue until the requested key matches]
00:06:04.513 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:04.513 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:06:04.513 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:06:04.513 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:06:04.513 nr_hugepages=512
00:06:04.513 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:06:04.513 resv_hugepages=0
00:06:04.513 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:06:04.513 surplus_hugepages=0
00:06:04.513 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:06:04.513 anon_hugepages=0
00:06:04.513 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
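At this point the test has collected nr_hugepages=512, surp=0 and resv=0 and, as the next trace lines show, verifies that the kernel-reported HugePages_Total equals nr_hugepages + surp + resv. A standalone sketch of that accounting check follows; the awk lookup is an illustrative stand-in for get_meminfo, not the setup/hugepages.sh code itself.

    nr_hugepages=512 surp=0 resv=0     # values echoed by the trace above
    total=$(awk '$1 == "HugePages_Total:" { print $2 }' /proc/meminfo)
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch: kernel reports $total"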
00:06:04.513 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:06:04.513 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:06:04.513 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:06:04.513 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:06:04.513 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:06:04.513 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:06:04.513 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:04.513 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:04.513 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:04.513 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:04.513 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:04.513 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:04.513 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:04.513 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:04.513 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8950548 kB' 'MemAvailable: 10559564 kB' 'Buffers: 2436 kB' 'Cached: 1822780 kB' 'SwapCached: 0 kB' 'Active: 452328 kB' 'Inactive: 1494684 kB' 'Active(anon): 132260 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1494684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123448 kB' 'Mapped: 48800 kB' 'Shmem: 10464 kB' 'KReclaimable: 62408 kB' 'Slab: 134112 kB' 'SReclaimable: 62408 kB' 'SUnreclaim: 71704 kB' 'KernelStack: 6416 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB'
[xtrace elided: the same key-by-key scan runs against this snapshot, skipping every field (MemTotal through Unaccepted) with setup/common.sh@32 continue until HugePages_Total matches]
00:06:04.515 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:04.515 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512
00:06:04.515 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:06:04.515 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:06:04.515 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:06:04.515 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:06:04.515 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:04.515 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:06:04.515 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:06:04.775 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
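get_nodes found a single NUMA node and expects 512 pages on it (nodes_sys[0]=512, no_nodes=1); the loop that starts next queries each node's own counters from /sys/devices/system/node/node<N>/meminfo. Below is a self-contained sketch of such a per-node pass; the bookkeeping arrays of setup/hugepages.sh are not reproduced, and the awk parsing is an illustrative stand-in for get_meminfo.

    shopt -s nullglob
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        # Per-node counters live in the node's own meminfo file, prefixed "Node <n>".
        read -r total free surp < <(awk '
            /HugePages_Total:/ { t = $NF }
            /HugePages_Free:/  { f = $NF }
            /HugePages_Surp:/  { s = $NF }
            END { print t, f, s }' "$node_dir/meminfo")
        printf 'node%s: HugePages_Total=%s Free=%s Surp=%s\n' "$node" "$total" "$free" "$surp"
    done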
00:06:04.775 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:04.775 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:06:04.775 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:06:04.775 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:04.775 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:06:04.775 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:06:04.775 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:04.775 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:04.775 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:06:04.775 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:06:04.775 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:04.775 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:04.775 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:04.775 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:04.775 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8950548 kB' 'MemUsed: 3291428 kB' 'SwapCached: 0 kB' 'Active: 452348 kB' 'Inactive: 1494684 kB' 'Active(anon): 132280 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1494684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'FilePages: 1825216 kB' 'Mapped: 48800 kB' 'AnonPages: 123472 kB' 'Shmem: 10464 kB' 'KernelStack: 6432 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62408 kB' 'Slab: 134112 kB' 'SReclaimable: 62408 kB' 'SUnreclaim: 71704 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace elided: the key-by-key scan of node0's meminfo snapshot runs from MemTotal through SReclaimable without matching HugePages_Surp, each field skipped with setup/common.sh@32 continue]
00:06:04.776 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:04.776 08:48:11
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.776 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.776 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.776 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.776 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.776 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.776 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.776 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.776 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.776 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.776 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.776 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.776 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.777 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.777 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.777 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.777 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.777 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.777 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.777 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.777 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.777 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.777 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.777 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.777 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.777 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.777 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.777 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.777 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.777 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.777 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.777 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.777 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.777 08:48:11 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.777 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.777 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:06:04.777 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:04.777 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:04.777 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:04.777 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:04.777 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:04.777 node0=512 expecting 512 00:06:04.777 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:06:04.777 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:06:04.777 00:06:04.777 real 0m0.549s 00:06:04.777 user 0m0.257s 00:06:04.777 sys 0m0.301s 00:06:04.777 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.777 08:48:11 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:04.777 ************************************ 00:06:04.777 END TEST per_node_1G_alloc 00:06:04.777 ************************************ 00:06:04.777 08:48:11 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:06:04.777 08:48:11 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:04.777 08:48:11 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.777 08:48:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:04.777 ************************************ 00:06:04.777 START TEST even_2G_alloc 00:06:04.777 ************************************ 00:06:04.777 08:48:11 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:06:04.777 08:48:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:06:04.777 08:48:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:06:04.777 08:48:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:06:04.777 08:48:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:04.777 08:48:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:06:04.777 08:48:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:06:04.777 08:48:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:04.777 08:48:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:04.777 08:48:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:04.777 08:48:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:04.777 08:48:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:04.777 08:48:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:04.777 08:48:11 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:04.777 08:48:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:06:04.777 08:48:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:04.777 08:48:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:06:04.777 08:48:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:06:04.777 08:48:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:06:04.777 08:48:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:04.777 08:48:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:06:04.777 08:48:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:06:04.777 08:48:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:06:04.777 08:48:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:04.777 08:48:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:05.038 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:05.038 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:05.038 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:05.038 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:06:05.038 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:06:05.038 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:05.038 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:05.038 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:05.038 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:05.038 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
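At the start of even_2G_alloc the trace converts the requested 2097152 kB (2 GiB) into a hugepage count and distributes it over the available NUMA nodes before exporting NRHUGE=1024 and HUGE_EVEN_ALLOC=yes and calling scripts/setup.sh. With the 2048 kB default hugepage size that request works out to 1024 pages, and with a single node the whole count lands on node 0. A compact sketch of that arithmetic (variable names chosen for illustration; the script's own helpers are get_test_nr_hugepages and get_test_nr_hugepages_per_node, and it simply assigns the full count to the last node rather than dividing, which is equivalent in this single-node run):

    size_kb=2097152                 # requested allocation in kB (2 GiB)
    default_hugepage_kb=2048        # Hugepagesize reported in the trace
    nr_hugepages=$(( size_kb / default_hugepage_kb ))   # 1024 pages

    nodes=1                         # single-node VM in this run
    declare -a nodes_test
    for (( n = 0; n < nodes; n++ )); do
        nodes_test[n]=$(( nr_hugepages / nodes ))        # 1024 pages on node 0
    done
    echo "NRHUGE=$nr_hugepages HUGE_EVEN_ALLOC=yes"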
00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7902468 kB' 'MemAvailable: 9511484 kB' 'Buffers: 2436 kB' 'Cached: 1822780 kB' 'SwapCached: 0 kB' 'Active: 453328 kB' 'Inactive: 1494684 kB' 'Active(anon): 133260 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1494684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 124372 kB' 'Mapped: 48960 kB' 'Shmem: 10464 kB' 'KReclaimable: 62408 kB' 'Slab: 134164 kB' 'SReclaimable: 62408 kB' 'SUnreclaim: 71756 kB' 'KernelStack: 6436 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.039 08:48:12 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
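The AnonHugePages pass being traced here only runs because the check at hugepages.sh@96 saw the transparent-hugepage mode "always [madvise] never", i.e. THP is not pinned to [never]. A minimal reproduction of that gate, assuming the usual sysfs location /sys/kernel/mm/transparent_hugepage/enabled (the path itself does not appear in this log):

    thp_mode=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)  # e.g. "always [madvise] never"
    if [[ $thp_mode != *"[never]"* ]]; then
        # THP may contribute anonymous hugepages, so they are folded into the accounting
        anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    else
        anon_kb=0
    fi
    echo "AnonHugePages: ${anon_kb:-0} kB"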
00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:05.039 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7902216 kB' 'MemAvailable: 9511232 kB' 'Buffers: 2436 kB' 'Cached: 1822780 kB' 'SwapCached: 0 kB' 'Active: 452632 kB' 'Inactive: 1494684 kB' 'Active(anon): 132564 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1494684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 123732 kB' 'Mapped: 48800 kB' 'Shmem: 10464 kB' 'KReclaimable: 62408 kB' 'Slab: 134156 kB' 'SReclaimable: 62408 kB' 'SUnreclaim: 71748 kB' 'KernelStack: 6416 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.040 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.041 08:48:12 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.041 08:48:12 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
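verify_nr_hugepages keeps pulling the same handful of counters (AnonHugePages above, then HugePages_Surp here, with HugePages_Rsvd next) and finally compares the per-node totals against what the test configured, the same way the previous test printed "node0=512 expecting 512". Stripped of the per-field trace, the check amounts to something like the following sketch, not the script's literal code; 1024 is the expectation implied by NRHUGE for this even_2G_alloc run:

    expected=1024
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
    resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
    echo "node0=$total expecting $expected (surp=$surp resv=$resv)"
    [[ $total == "$expected" ]]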
00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.041 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.042 
08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.042 08:48:12 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:05.042 08:48:12 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.042 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7902216 kB' 'MemAvailable: 9511232 kB' 'Buffers: 2436 kB' 'Cached: 1822780 kB' 'SwapCached: 0 kB' 'Active: 452624 kB' 'Inactive: 1494684 kB' 'Active(anon): 132556 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1494684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 123708 kB' 'Mapped: 48800 kB' 'Shmem: 10464 kB' 'KReclaimable: 62408 kB' 'Slab: 134152 kB' 'SReclaimable: 62408 kB' 'SUnreclaim: 71744 kB' 'KernelStack: 6416 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
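The trace above (and the identical blocks that follow) is setup/common.sh's get_meminfo helper scanning a meminfo snapshot field by field: it reads each "name: value" pair with IFS=': ', hits "continue" for every field that is not the requested key, and finally echoes the matching value. A condensed, illustrative sketch of that lookup pattern — the function name is hypothetical and this is not the script's verbatim code:

    # Condensed sketch of the meminfo lookup traced above (illustrative only).
    shopt -s extglob
    get_meminfo_sketch() {
        local key=$1 node=${2:-}                  # e.g. "HugePages_Rsvd", optional NUMA node id
        local file=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
            && file=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$file"
        mem=("${mem[@]#Node +([0-9]) }")          # per-node files prefix each line with "Node <id> "
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }
    # e.g. get_meminfo_sketch HugePages_Rsvd    -> prints 0 on this runner
    #      get_meminfo_sketch HugePages_Free 0  -> per-node value for node 0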
00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.043 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.044 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.044 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.044 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.044 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.044 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.044 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.044 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.044 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.044 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.044 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.044 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.044 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.305 08:48:12 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.305 08:48:12 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.305 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:05.306 nr_hugepages=1024 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:05.306 resv_hugepages=0 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:05.306 surplus_hugepages=0 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:05.306 anon_hugepages=0 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:05.306 08:48:12 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7902216 kB' 'MemAvailable: 9511232 kB' 'Buffers: 2436 kB' 'Cached: 1822780 kB' 'SwapCached: 0 kB' 'Active: 452644 kB' 'Inactive: 1494684 kB' 'Active(anon): 132576 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1494684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 123748 kB' 'Mapped: 48800 kB' 'Shmem: 10464 kB' 'KReclaimable: 62408 kB' 'Slab: 134152 kB' 'SReclaimable: 62408 kB' 'SUnreclaim: 71744 kB' 'KernelStack: 6432 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.306 08:48:12 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.306 
08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.306 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
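By this point the test has recorded surp=0 and resv=0 and is re-reading HugePages_Total so it can confirm (per the hugepages.sh@107/@110 checks in this trace) that the kernel's total equals the requested page count plus the surplus and reserved pages. A hypothetical illustration of that accounting assertion, reusing the sketch above:

    # Hypothetical illustration of the hugepage accounting check in the trace:
    # HugePages_Total must equal the requested count plus surplus and reserved.
    nr_hugepages=1024   # value requested by the test (from the trace)
    surp=0              # HugePages_Surp read earlier
    resv=0              # HugePages_Rsvd read earlier
    total=$(get_meminfo_sketch HugePages_Total) || total=0
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting is consistent: $total pages"
    else
        echo "mismatch: total=$total expected=$((nr_hugepages + surp + resv))" >&2
    fi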
00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.307 08:48:12 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.307 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.308 
08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7902216 kB' 'MemUsed: 4339760 kB' 'SwapCached: 0 kB' 'Active: 452448 kB' 'Inactive: 1494684 kB' 'Active(anon): 132380 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1494684 kB' 'Unevictable: 1536 kB' 'Mlocked: 
0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 1825216 kB' 'Mapped: 48800 kB' 'AnonPages: 123512 kB' 'Shmem: 10464 kB' 'KernelStack: 6416 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62408 kB' 'Slab: 134152 kB' 'SReclaimable: 62408 kB' 'SUnreclaim: 71744 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
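The snapshot just printed comes from /sys/devices/system/node/node0/meminfo: once get_nodes has counted the NUMA nodes (no_nodes=1 here), the same helper is called with a node argument, so it reads that node's own meminfo file and strips the "Node 0" prefix from each line before matching. A hypothetical per-node walk in the same spirit, again reusing the earlier sketch:

    # Hypothetical per-node walk mirroring get_nodes/get_meminfo above: list the
    # online NUMA nodes and read each node's free hugepages from its own meminfo.
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        [[ -d $node_dir ]] || continue
        node=${node_dir##*node}
        free=$(get_meminfo_sketch HugePages_Free "$node")
        echo "node$node: HugePages_Free=$free"
    done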
00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.308 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # continue 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.309 08:48:12 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:05.309 node0=1024 expecting 1024 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:05.309 00:06:05.309 real 0m0.525s 00:06:05.309 user 0m0.258s 00:06:05.309 sys 0m0.301s 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.309 08:48:12 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:05.309 ************************************ 00:06:05.309 END TEST even_2G_alloc 00:06:05.309 ************************************ 00:06:05.309 08:48:12 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:06:05.309 08:48:12 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.309 08:48:12 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.309 08:48:12 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:05.309 ************************************ 00:06:05.309 START TEST odd_alloc 00:06:05.309 ************************************ 00:06:05.309 08:48:12 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:06:05.309 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:06:05.309 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:06:05.309 08:48:12 
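The even_2G_alloc run above ends by confirming node0 holds the expected 1024 hugepages ("node0=1024 expecting 1024"), and odd_alloc deliberately asks for a non-even total: get_test_nr_hugepages is invoked with size=2098176 kB (HUGEMEM=2049 MiB) and the trace that follows settles on nr_hugepages=1025. With the 2048 kB Hugepagesize reported in the meminfo snapshots further down, 2098176 kB works out to 1024.5 pages, so the count rounds up to 1025. A minimal sketch of that arithmetic, assuming ceiling division (the exact rounding inside setup/hugepages.sh is not visible in this trace):

    # re-deriving nr_hugepages from the values in this log (illustrative only)
    size_kb=2098176      # HUGEMEM=2049 MiB expressed in kB
    hugepage_kb=2048     # Hugepagesize reported in /proc/meminfo
    nr_hugepages=$(( (size_kb + hugepage_kb - 1) / hugepage_kb ))
    echo "$nr_hugepages" # prints 1025, matching setup/hugepages.sh@57 below

With a single node (_no_nodes=1), hugepages.sh@82 then assigns the whole 1025 to the last nodes_test slot, which is the per-node figure verify_nr_hugepages goes on to check, just as even_2G_alloc checked node0=1024 above.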
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:06:05.309 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:05.309 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:06:05.309 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:06:05.309 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:05.309 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:05.309 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:06:05.309 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:05.309 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:05.309 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:05.309 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:05.309 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:06:05.310 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:05.310 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:06:05.310 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:06:05.310 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:06:05.310 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:05.310 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:06:05.310 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:06:05.310 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:06:05.310 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:05.310 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:05.569 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:05.569 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:05.569 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:05.569 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:06:05.569 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:06:05.569 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:05.569 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:05.569 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:05.569 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:05.569 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:05.569 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:05.569 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:05.569 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:05.569 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 
00:06:05.569 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:05.569 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:05.569 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:05.569 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:05.569 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7901516 kB' 'MemAvailable: 9510532 kB' 'Buffers: 2436 kB' 'Cached: 1822780 kB' 'SwapCached: 0 kB' 'Active: 452688 kB' 'Inactive: 1494684 kB' 'Active(anon): 132620 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1494684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123716 kB' 'Mapped: 48872 kB' 'Shmem: 10464 kB' 'KReclaimable: 62408 kB' 'Slab: 134232 kB' 'SReclaimable: 62408 kB' 'SUnreclaim: 71824 kB' 'KernelStack: 6420 kB' 'PageTables: 4316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 354948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.570 
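The long run of "[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" / "continue" entries above and below is setup/common.sh's get_meminfo helper scanning one full /proc/meminfo snapshot field by field: mapfile loads every line into mem[], the "Node <N> " prefix is stripped (common.sh@29), and each entry is split with IFS=': ' into a key and a value until the key matches the requested field (AnonHugePages here), at which point the value is echoed back (common.sh@33). A minimal re-creation of that pattern, not the actual setup/common.sh source, with an illustrative function name:

    # sketch of the scan traced at setup/common.sh@28-@33 (names are illustrative)
    shopt -s extglob                          # needed for the +([0-9]) prefix strip
    get_meminfo_sketch() {
        local get=$1 line var val _
        local -a mem
        mapfile -t mem < /proc/meminfo
        mem=("${mem[@]#Node +([0-9]) }")      # drop any per-node prefix, as in common.sh@29
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue  # the repeated [[ ... ]] / continue entries above
            echo "$val"                       # matched value, e.g. 0 for AnonHugePages here
            return 0
        done
        return 1
    }

On the host captured above, get_meminfo_sketch AnonHugePages would print 0, which is the value the caller picks up in the next step of the trace.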
08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.570 08:48:12 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.570 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.571 08:48:12 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.571 
08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:05.571 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7901516 kB' 'MemAvailable: 9510532 kB' 'Buffers: 2436 kB' 'Cached: 1822780 kB' 'SwapCached: 0 kB' 'Active: 452520 kB' 'Inactive: 1494684 kB' 'Active(anon): 132452 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1494684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123580 kB' 'Mapped: 48812 kB' 'Shmem: 10464 kB' 'KReclaimable: 62408 kB' 'Slab: 134216 kB' 'SReclaimable: 62408 kB' 'SUnreclaim: 71808 kB' 'KernelStack: 6400 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 354948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.835 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.836 08:48:12 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.836 
08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.836 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.837 08:48:12 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
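Each pass of get_meminfo ends with "echo 0" followed by "return 0" (common.sh@33): the matched value is written to stdout and captured by the caller, which is why the trace records anon=0 at hugepages.sh@97 and surp=0 at hugepages.sh@99 right after the AnonHugePages and HugePages_Surp scans, and why a third scan for HugePages_Rsvd starts at hugepages.sh@100. A plausible shape of those callers, hedged because the exact hugepages.sh lines are not shown in this excerpt:

    # assumed capture pattern behind anon=0 / surp=0 in the trace above
    anon=$(get_meminfo AnonHugePages)    # 'AnonHugePages: 0 kB'  -> anon=0
    surp=$(get_meminfo HugePages_Surp)   # 'HugePages_Surp: 0'    -> surp=0
    resv=$(get_meminfo HugePages_Rsvd)   # third scan, still in progress below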
00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7901768 kB' 'MemAvailable: 9510784 kB' 'Buffers: 2436 kB' 'Cached: 1822780 kB' 'SwapCached: 0 kB' 'Active: 452764 kB' 'Inactive: 1494684 kB' 'Active(anon): 132696 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1494684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123828 kB' 'Mapped: 48812 kB' 'Shmem: 10464 kB' 'KReclaimable: 62408 kB' 'Slab: 134216 kB' 'SReclaimable: 62408 kB' 'SUnreclaim: 71808 kB' 'KernelStack: 6400 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 354948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.837 08:48:12 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.837 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.838 08:48:12 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
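[editorial note] The long run of `[[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]` / `continue` pairs traced above is a single linear scan of /proc/meminfo: the helper reads the file into an array, strips any "Node <n> " prefix so per-node meminfo files parse the same way, then walks key/value pairs until the requested key matches and echoes its value. A minimal standalone sketch of that pattern follows; the function name and loop shape here are illustrative assumptions, not the literal setup/common.sh helper being traced.

#!/usr/bin/env bash
# Hypothetical sketch of the meminfo lookup pattern exercised by the xtrace above.
shopt -s extglob

lookup_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo}
    local -a mem
    local line var val _

    mapfile -t mem < "$mem_f"
    # Per-node meminfo files prefix each line with "Node <id> "; drop that prefix.
    mem=("${mem[@]#Node +([0-9]) }")

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # same skip-until-match scan as in the trace
        echo "$val"                        # e.g. "0" for HugePages_Rsvd, "1025" for HugePages_Total
        return 0
    done
    return 1
}

# Usage: system-wide value, then node 0's value if a per-node file exists.
lookup_meminfo HugePages_Rsvd
[[ -e /sys/devices/system/node/node0/meminfo ]] \
    && lookup_meminfo HugePages_Rsvd /sys/devices/system/node/node0/meminfo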
00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.838 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.839 
08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:05.839 nr_hugepages=1025 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:06:05.839 resv_hugepages=0 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:05.839 surplus_hugepages=0 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:05.839 anon_hugepages=0 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7901768 kB' 'MemAvailable: 9510784 kB' 'Buffers: 2436 kB' 'Cached: 1822780 kB' 'SwapCached: 0 kB' 'Active: 452496 kB' 'Inactive: 1494684 kB' 'Active(anon): 132428 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1494684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123556 kB' 'Mapped: 48804 kB' 'Shmem: 10464 kB' 'KReclaimable: 62408 kB' 'Slab: 134232 kB' 'SReclaimable: 62408 kB' 'SUnreclaim: 71824 kB' 'KernelStack: 6416 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 354948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.839 
08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.839 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.840 08:48:12 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.840 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7901768 kB' 'MemUsed: 4340208 kB' 'SwapCached: 0 kB' 'Active: 452756 kB' 'Inactive: 1494684 kB' 'Active(anon): 132688 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1494684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 1825216 kB' 'Mapped: 48804 kB' 'AnonPages: 123816 kB' 'Shmem: 10464 kB' 'KernelStack: 6416 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62408 kB' 'Slab: 134232 kB' 'SReclaimable: 62408 kB' 'SUnreclaim: 71824 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.841 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.842 08:48:12 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:05.842 node0=1025 expecting 1025 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:06:05.842 00:06:05.842 real 0m0.530s 00:06:05.842 user 0m0.250s 00:06:05.842 sys 0m0.312s 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.842 08:48:12 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:05.842 ************************************ 00:06:05.842 END TEST odd_alloc 00:06:05.842 ************************************ 00:06:05.842 08:48:12 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:06:05.842 08:48:12 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.842 08:48:12 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.842 08:48:12 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:05.842 ************************************ 00:06:05.842 START TEST custom_alloc 00:06:05.842 ************************************ 00:06:05.842 08:48:12 
setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:06:05.842 08:48:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:06:05.842 08:48:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:06:05.842 08:48:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:06:05.842 08:48:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:06:05.842 08:48:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:06:05.842 08:48:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:06:05.842 08:48:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:06:05.842 08:48:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:06:05.842 08:48:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:05.842 08:48:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:06:05.842 08:48:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:06:05.842 08:48:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:05.842 08:48:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:05.842 08:48:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:06:05.842 08:48:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:05.842 08:48:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:05.842 08:48:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:05.842 08:48:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:05.842 08:48:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:06:05.842 08:48:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:05.842 08:48:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:06:05.843 08:48:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:06:05.843 08:48:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:06:05.843 08:48:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:05.843 08:48:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:06:05.843 08:48:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:06:05.843 08:48:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:06:05.843 08:48:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:06:05.843 08:48:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:06:05.843 08:48:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:06:05.843 08:48:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:05.843 08:48:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:05.843 08:48:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:06:05.843 
(setup/hugepages.sh@65-74, second pass: _no_nodes=1, nodes_test=() re-initialised; (( 1 > 0 )), so the per-node counts are copied from nodes_hp)
00:06:05.843 08:48:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:06:05.843 08:48:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:06:05.843 08:48:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:06:05.843 08:48:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
00:06:05.843 08:48:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:06:05.843 08:48:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:06:05.843 08:48:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:06.102 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:06.102 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:06:06.102 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:06:06.102 08:48:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512
00:06:06.102 08:48:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
(setup/hugepages.sh@89-94: locals node, sorted_t, sorted_s, surp, resv, anon declared)
00:06:06.369 08:48:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:06:06.369 08:48:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
(setup/common.sh@17-29: get=AnonHugePages, no node given, so mem_f=/proc/meminfo; mapfile -t mem, any "Node <n> " prefix stripped, then the file is read back field by field with IFS=': ' read -r var val _)
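The get_meminfo calls that fill the rest of this trace all follow one pattern: read /proc/meminfo (or the per-node copy under /sys/devices/system/node when a node is given), split each line on ': ', and walk key by key until the requested field matches, echoing its value. A simplified stand-in for that loop, kept deliberately minimal (an illustration of the pattern, not the setup/common.sh helper verbatim; the traced version buffers the whole file with mapfile and strips any leading "Node <n>" prefix first):

# Simplified stand-in for the get_meminfo pattern traced below.
get_meminfo_sketch() {
    local want=$1 key val _
    while IFS=': ' read -r key val _; do
        if [[ $key == "$want" ]]; then
            echo "${val:-0}"           # value only; units are dropped by the split
            return 0
        fi
    done < /proc/meminfo
    echo 0                             # key not present
}

anon=$(get_meminfo_sketch AnonHugePages)    # 0 on the VM in this run
surp=$(get_meminfo_sketch HugePages_Surp)   # 0 on the VM in this run
echo "anon=$anon surp=$surp"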
00:06:06.369 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8952040 kB' 'MemAvailable: 10561056 kB' 'Buffers: 2436 kB' 'Cached: 1822780 kB' 'SwapCached: 0 kB' 'Active: 452872 kB' 'Inactive: 1494684 kB' 'Active(anon): 132804 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1494684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123876 kB' 'Mapped: 49128 kB' 'Shmem: 10464 kB' 'KReclaimable: 62408 kB' 'Slab: 134244 kB' 'SReclaimable: 62408 kB' 'SUnreclaim: 71836 kB' 'KernelStack: 6440 kB' 'PageTables: 4156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB'
(setup/common.sh@32: the per-key scan against AnonHugePages then walks every field above, MemTotal through HardwareCorrupted, skipping each one)
00:06:06.370 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:06.370 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:06:06.370 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:06:06.370 08:48:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
00:06:06.370 08:48:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
(setup/common.sh@17-29: get=HugePages_Surp, no node given, mem_f=/proc/meminfo, mapfile -t mem as before)
00:06:06.371 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' (second /proc/meminfo snapshot; identical to the one above except Active: 452588 kB, Active(anon): 132520 kB, AnonPages: 123592 kB, Mapped: 48988 kB, Slab: 134240 kB, SUnreclaim: 71832 kB, KernelStack: 6400 kB, PageTables: 4184 kB, VmallocUsed: 54772 kB)
(setup/common.sh@32: the scan against HugePages_Surp starts over from MemTotal and MemFree)
(setup/common.sh@32: the scan continues, skipping every field from MemAvailable through HugePages_Rsvd)
00:06:06.372 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:06.372 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:06:06.372 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:06:06.372 08:48:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:06:06.372 08:48:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
(setup/common.sh@17-20: get=HugePages_Rsvd, no node given, locals var val and mem_f mem declared)
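With anon and surp both read back as 0 and HugePages_Rsvd about to be fetched, the verification boils down to checking that the kernel's hugepage counters match the 512 pages requested for node 0. Roughly, and only as an illustration (verify_nr_hugepages in setup/hugepages.sh keeps fuller per-node bookkeeping than this):

# Rough sketch of the final comparison only; the bookkeeping in setup/hugepages.sh
# (per-node arrays, sorted_t/sorted_s, surplus and reserved counts) is richer.
expected=512                                         # pages requested via HUGENODE above
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
free=$(awk '/^HugePages_Free:/  {print $2}' /proc/meminfo)
rsvd=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)

# Single-node VM in this run, so the system-wide counter stands in for node 0.
echo "node0=${total} expecting ${expected}"
if (( total == expected )); then
    echo "custom_alloc: hugepage count matches (free=${free}, rsvd=${rsvd})"
else
    echo "custom_alloc: unexpected hugepage count" >&2
    exit 1
fi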
(setup/common.sh@22-29: mem_f=/proc/meminfo, no node-specific meminfo under /sys/devices/system/node, mapfile -t mem as before)
00:06:06.373 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' (third /proc/meminfo snapshot; identical to the second except MemFree: 8952560 kB and MemAvailable: 10561576 kB)
(setup/common.sh@32: the per-key scan against HugePages_Rsvd is under way; MemTotal through CmaFree have been read and skipped so far)
00:06:06.374 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.374 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.374 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.374 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.374 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.374 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.374 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.374 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.374 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.374 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.374 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.374 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.374 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.374 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:06.374 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:06.374 08:48:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:06.374 nr_hugepages=512 00:06:06.374 08:48:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:06:06.374 resv_hugepages=0 00:06:06.374 08:48:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:06.374 surplus_hugepages=0 00:06:06.374 08:48:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:06.374 anon_hugepages=0 00:06:06.374 08:48:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8952640 kB' 'MemAvailable: 10561656 kB' 'Buffers: 2436 kB' 'Cached: 1822780 kB' 'SwapCached: 0 kB' 'Active: 452404 kB' 'Inactive: 1494684 kB' 'Active(anon): 132336 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1494684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123692 kB' 'Mapped: 48864 kB' 'Shmem: 10464 kB' 'KReclaimable: 62408 kB' 'Slab: 134240 kB' 'SReclaimable: 62408 kB' 'SUnreclaim: 71832 kB' 'KernelStack: 6416 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.375 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.376 08:48:13 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:06.376 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8952640 kB' 'MemUsed: 3289336 kB' 'SwapCached: 0 kB' 'Active: 452328 kB' 'Inactive: 1494684 kB' 'Active(anon): 132260 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1494684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 1825216 kB' 'Mapped: 48864 kB' 'AnonPages: 123592 kB' 'Shmem: 10464 kB' 'KernelStack: 6384 kB' 'PageTables: 4144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62408 kB' 'Slab: 134240 kB' 'SReclaimable: 62408 kB' 'SUnreclaim: 71832 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.377 08:48:13 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.377 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.378 08:48:13 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:06.378 node0=512 expecting 512 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:06:06.378 00:06:06.378 real 0m0.514s 00:06:06.378 user 0m0.248s 00:06:06.378 sys 0m0.298s 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.378 08:48:13 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:06.378 ************************************ 00:06:06.378 END TEST custom_alloc 00:06:06.378 ************************************ 00:06:06.378 08:48:13 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:06:06.378 08:48:13 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.378 08:48:13 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.378 08:48:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:06.378 ************************************ 00:06:06.378 START TEST no_shrink_alloc 00:06:06.378 ************************************ 00:06:06.378 08:48:13 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:06:06.378 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:06:06.378 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:06:06.378 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:06:06.378 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:06:06.378 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:06:06.378 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:06:06.378 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:06.378 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:06:06.378 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:06:06.378 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # 
user_nodes=('0') 00:06:06.378 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:06.378 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:06.378 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:06.378 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:06.378 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:06.378 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:06:06.378 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:06:06.378 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:06:06.378 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:06:06.378 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:06:06.378 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:06.378 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:06.652 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:06.652 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:06.652 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:06.916 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:06:06.916 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:06:06.916 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:06.916 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:06.916 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:06.916 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:06.916 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:06.916 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:06.916 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:06.916 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:06.916 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:06.916 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:06.916 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:06.916 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:06.916 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:06.916 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:06.916 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:06.916 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:06.916 08:48:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.916 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7900792 kB' 'MemAvailable: 9509808 kB' 'Buffers: 2436 kB' 'Cached: 1822780 kB' 'SwapCached: 0 kB' 'Active: 453244 kB' 'Inactive: 1494684 kB' 'Active(anon): 133176 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1494684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 124044 kB' 'Mapped: 48904 kB' 'Shmem: 10464 kB' 'KReclaimable: 62408 kB' 'Slab: 134244 kB' 'SReclaimable: 62408 kB' 'SUnreclaim: 71836 kB' 'KernelStack: 6420 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:06:06.916 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.916 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.916 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.917 08:48:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.917 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7900792 kB' 'MemAvailable: 9509808 kB' 'Buffers: 2436 kB' 'Cached: 1822780 kB' 'SwapCached: 0 kB' 'Active: 452704 kB' 'Inactive: 1494684 kB' 'Active(anon): 132636 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1494684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123712 kB' 'Mapped: 48804 kB' 'Shmem: 10464 kB' 'KReclaimable: 62408 kB' 'Slab: 134240 kB' 'SReclaimable: 62408 kB' 'SUnreclaim: 71832 kB' 'KernelStack: 6400 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 
kB' 'DirectMap1G: 9437184 kB' 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.918 08:48:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.918 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.919 08:48:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.919 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.920 08:48:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7900792 kB' 'MemAvailable: 9509808 kB' 'Buffers: 2436 kB' 'Cached: 1822780 kB' 'SwapCached: 0 kB' 'Active: 452716 kB' 'Inactive: 1494684 kB' 'Active(anon): 132648 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1494684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123796 kB' 'Mapped: 48804 kB' 'Shmem: 10464 kB' 'KReclaimable: 62408 kB' 'Slab: 134240 kB' 'SReclaimable: 62408 kB' 'SUnreclaim: 71832 kB' 'KernelStack: 6432 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.920 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.921 08:48:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.921 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.921 08:48:13 
[xtrace: get_meminfo checks each remaining /proc/meminfo key (SUnreclaim through HugePages_Free) against HugePages_Rsvd; every non-matching key runs setup/common.sh@32 'continue', then setup/common.sh@31 re-splits the next line with IFS=': ' and 'read -r var val _']
00:06:06.922 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:06.922 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:06:06.922 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:06:06.922 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:06:06.922 nr_hugepages=1024
08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:06:06.922 resv_hugepages=0
00:06:06.922 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:06:06.922 surplus_hugepages=0
00:06:06.922 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:06:06.922 anon_hugepages=0
00:06:06.922 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:06:06.922 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:06.922 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:06:06.922 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:06:06.922 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:06:06.922 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:06:06.922 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:06:06.922 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:06.922 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:06.922 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:06.922 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:06.922 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:06.922 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:06.922 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:06.922 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:06.922 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7900792 kB' 'MemAvailable: 9509808 kB' 'Buffers: 2436 kB' 'Cached: 1822780 kB' 'SwapCached: 0 kB' 'Active: 452712 kB' 'Inactive: 1494684 kB' 'Active(anon): 132644 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1494684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123788 kB' 'Mapped: 48804 kB' 'Shmem: 10464 kB' 'KReclaimable: 62408 kB' 'Slab: 134240 kB' 'SReclaimable: 62408 kB' 'SUnreclaim: 71832 kB' 'KernelStack: 6432 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB'
[xtrace: the same key scan now runs against HugePages_Total; every key from MemTotal through Unaccepted is a non-match, each hitting setup/common.sh@32 'continue' before the loop reads the next line]
00:06:06.924 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:06.924 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:06:06.924 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:06:06.924 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:06.924 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:06:06.924 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:06:06.924 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:06.924 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:06:06.924 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:06:06.924 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:06:06.924 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:06.924 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:06:06.924 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:06:06.924 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:06.924 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:06:06.924 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:06:06.924 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:06.924 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:06.924 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:06:06.924 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:06:06.924 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:06.924 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:06.924 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:06.924 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:06.924 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7901312 kB' 'MemUsed: 4340664 kB' 'SwapCached: 0 kB' 'Active: 452676 kB' 'Inactive: 1494684 kB' 'Active(anon): 132608 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1494684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 1825216 kB' 'Mapped: 48804 kB' 'AnonPages: 123756 kB' 'Shmem: 10464 kB' 'KernelStack: 6416 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62408 kB' 'Slab: 134240 kB' 'SReclaimable: 62408 kB' 'SUnreclaim: 71832 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace: the key scan runs against HugePages_Surp over the node0 snapshot; every key from MemTotal through HugePages_Free is a non-match, each hitting setup/common.sh@32 'continue' before the loop reads the next line]
00:06:06.925 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:06.925 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
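The get_nodes call traced a little earlier is what sets up this per-node pass: one expected page count per /sys/devices/system/node/node<N> directory, which the test then compares with what each node actually reports ('node0=1024 expecting 1024' below). A rough stand-alone bash sketch of that enumeration; array and variable names are illustrative, and extglob is assumed as in the traced glob:

    #!/usr/bin/env bash
    # Record, per /sys/devices/system/node/node<N> directory, how many 2 MiB
    # pages that node is expected to hold -- all 1024 on this single-node VM.
    shopt -s extglob nullglob
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=1024   # node index -> expected hugepage count
    done
    no_nodes=${#nodes_sys[@]}
    echo "no_nodes=$no_nodes"            # 1 in the run above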
08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:06:06.925 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:06:06.925 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:06:06.925 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:06:06.925 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:06:06.925 node0=1024 expecting 1024
08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:06:06.925 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:06:06.926 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:06:06.926 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:06:06.926 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:06:06.926 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:06:06.926 08:48:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:07.185 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:07.185 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:06:07.185 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:06:07.185 INFO: Requested 512 hugepages but 1024 already allocated on node0
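The CLEAR_HUGE=no / NRHUGE=512 assignments above are how setup/hugepages.sh parameterizes the SPDK setup script for this "no shrink" case; with 1024 pages already reserved, the script only prints the INFO line and leaves the reservation in place. A hedged sketch of reproducing that step by hand (repo path as used on this CI VM; root privileges assumed):

    # Request 512 2 MiB hugepages without clearing the existing reservation.
    cd /home/vagrant/spdk_repo/spdk
    CLEAR_HUGE=no NRHUGE=512 ./scripts/setup.sh
    # Confirm what is actually reserved afterwards (still 1024 total in the run above):
    grep -E '^HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo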
08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:06:07.449 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:06:07.449 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:06:07.449 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:06:07.449 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:06:07.449 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:07.449 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:07.449 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:07.449 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:07.449 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:07.449 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:07.449 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:07.449 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:07.449 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7901244 kB' 'MemAvailable: 9510260 kB' 'Buffers: 2436 kB' 'Cached: 1822780 kB' 'SwapCached: 0 kB' 'Active: 448280 kB' 'Inactive: 1494684 kB' 'Active(anon): 128212 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1494684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 119372 kB' 'Mapped: 48372 kB' 'Shmem: 10464 kB' 'KReclaimable: 62408 kB' 'Slab: 134140 kB' 'SReclaimable: 62408 kB' 'SUnreclaim: 71732 kB' 'KernelStack: 6356 kB' 'PageTables: 3824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB'
[xtrace: the key scan now runs against AnonHugePages; the keys checked so far (MemTotal through Percpu) are all non-matches, each hitting setup/common.sh@32 'continue' before the loop reads the next key, and the scan carries on below]
continue 00:06:07.450 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.450 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.450 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.450 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.450 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.450 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.450 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.450 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:07.450 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:07.450 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:07.450 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:07.450 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:07.450 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:07.450 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:07.450 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:07.450 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7901488 kB' 'MemAvailable: 9510504 kB' 'Buffers: 2436 kB' 'Cached: 1822780 kB' 'SwapCached: 0 kB' 'Active: 447648 kB' 'Inactive: 1494684 kB' 'Active(anon): 127580 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1494684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 118920 kB' 'Mapped: 48312 kB' 'Shmem: 10464 kB' 'KReclaimable: 62404 kB' 'Slab: 134080 kB' 'SReclaimable: 62404 kB' 'SUnreclaim: 71676 kB' 'KernelStack: 6260 kB' 'PageTables: 3768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.451 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.452 
08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.452 08:48:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:07.452 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7901488 kB' 'MemAvailable: 9510504 kB' 'Buffers: 2436 kB' 'Cached: 1822780 kB' 'SwapCached: 0 kB' 'Active: 447684 kB' 'Inactive: 1494684 kB' 'Active(anon): 127616 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1494684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 118728 kB' 'Mapped: 48192 kB' 'Shmem: 10464 kB' 'KReclaimable: 62404 kB' 'Slab: 134072 kB' 'SReclaimable: 62404 kB' 'SUnreclaim: 71668 kB' 'KernelStack: 6304 kB' 'PageTables: 3752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.453 08:48:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.453 08:48:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.453 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.454 08:48:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
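
The long runs of IFS=': ', read -r var val _ and continue records above are bash xtrace from the get_meminfo helper in setup/common.sh: it walks the /proc/meminfo snapshot printed earlier one key at a time until it reaches the requested field (AnonHugePages for anon, HugePages_Surp for surp, and currently HugePages_Rsvd), echoes that key's value and returns. A minimal sketch of that loop, reconstructed from the trace rather than copied from the script (the exact function body is an assumption), is:

    # Sketch of the traced get_meminfo loop (reconstructed from the xtrace,
    # not the verbatim setup/common.sh source). Returns the value of one
    # /proc/meminfo key, optionally from a per-NUMA-node meminfo file.
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo
        local -a mem

        shopt -s extglob
        # Per-node lookups read the node-local file instead, when it exists.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo

        mapfile -t mem < "$mem_f"
        # Node-local files prefix every line with "Node N "; strip that.
        mem=("${mem[@]#Node +([0-9]) }")

        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the long runs of 'continue' above
            echo "$val"                        # value in kB, or a bare page count
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Each pass in this run finds a value of 0 for its key, which is why every lookup in the trace ends in echo 0 followed by return 0.
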
00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.454 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:07.455 nr_hugepages=1024 00:06:07.455 08:48:14 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:07.455 resv_hugepages=0 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:07.455 surplus_hugepages=0 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:07.455 anon_hugepages=0 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7901488 kB' 'MemAvailable: 9510504 kB' 'Buffers: 2436 kB' 'Cached: 1822780 kB' 'SwapCached: 0 kB' 'Active: 447272 kB' 'Inactive: 1494684 kB' 'Active(anon): 127204 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1494684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 118584 kB' 'Mapped: 48064 kB' 'Shmem: 10464 kB' 'KReclaimable: 62404 kB' 'Slab: 134056 kB' 'SReclaimable: 62404 kB' 'SUnreclaim: 71652 kB' 'KernelStack: 6320 kB' 'PageTables: 3760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.455 08:48:14 
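
At this point setup/hugepages.sh has recorded anon=0, surp=0 and resv=0, echoed the nr_hugepages=1024 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0 summary interleaved above, passed the arithmetic checks traced at hugepages.sh@107 and @109, and started a fresh get_meminfo HugePages_Total read at @110. A loose sketch of that bookkeeping, reusing the helper sketched earlier (the wrapper function name is hypothetical; the step ordering follows the trace):

    # Hypothetical wrapper around the traced hugepages.sh@97..@110 steps;
    # get_meminfo is the helper sketched earlier, and 1024 is the pool size
    # the no_shrink_alloc test expects to survive intact.
    check_no_shrink_alloc() {
        local nr_hugepages=1024
        local anon surp resv total

        anon=$(get_meminfo AnonHugePages)    # 0 kB in this run
        surp=$(get_meminfo HugePages_Surp)   # 0
        resv=$(get_meminfo HugePages_Rsvd)   # 0

        echo "nr_hugepages=$nr_hugepages"
        echo "resv_hugepages=$resv"
        echo "surplus_hugepages=$surp"
        echo "anon_hugepages=$anon"

        # The pool must still account for every requested page, with no
        # surplus or reserved pages masking a shrink. At 2048 kB per page,
        # 1024 pages is 2097152 kB, matching the Hugetlb line in the
        # /proc/meminfo snapshot printed above.
        (( 1024 == nr_hugepages + surp + resv )) || return 1
        (( 1024 == nr_hugepages )) || return 1

        # Re-read the live pool size; the comparison against it continues
        # past the end of this excerpt of the log.
        total=$(get_meminfo HugePages_Total)
    }
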
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.455 
08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.455 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.456 08:48:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:07.456 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:07.457 08:48:14 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7901488 kB' 'MemUsed: 4340488 kB' 'SwapCached: 0 kB' 'Active: 447248 kB' 'Inactive: 1494684 kB' 'Active(anon): 127180 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1494684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'FilePages: 1825216 kB' 'Mapped: 48064 kB' 'AnonPages: 118584 kB' 'Shmem: 10464 kB' 'KernelStack: 6320 kB' 'PageTables: 3760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62404 kB' 'Slab: 134056 kB' 'SReclaimable: 62404 kB' 'SUnreclaim: 71652 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.457 08:48:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.457 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.458 
08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.458 08:48:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:07.458 node0=1024 expecting 1024 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:07.458 00:06:07.458 real 0m1.038s 00:06:07.458 user 0m0.535s 00:06:07.458 sys 0m0.575s 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.458 08:48:14 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:07.458 ************************************ 00:06:07.458 END TEST no_shrink_alloc 00:06:07.458 ************************************ 00:06:07.458 08:48:14 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:06:07.458 08:48:14 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:06:07.458 08:48:14 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:06:07.458 08:48:14 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:07.458 08:48:14 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:07.458 08:48:14 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:07.458 08:48:14 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:07.458 08:48:14 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:06:07.458 08:48:14 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:06:07.458 ************************************ 00:06:07.458 END TEST hugepages 00:06:07.458 ************************************ 00:06:07.458 00:06:07.458 real 0m4.577s 00:06:07.458 user 0m2.182s 00:06:07.458 sys 0m2.495s 00:06:07.458 08:48:14 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.458 08:48:14 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:07.458 08:48:14 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:06:07.458 08:48:14 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:07.458 08:48:14 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.458 08:48:14 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:07.716 ************************************ 00:06:07.716 START TEST driver 00:06:07.716 ************************************ 00:06:07.716 08:48:14 setup.sh.driver -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:06:07.716 * Looking for test storage... 
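Editor's note on the xtrace block that just ended: the no_shrink_alloc check is only a field-by-field walk of /proc/meminfo (or a per-node meminfo file) until the HugePages_* counters are found, followed by the assertion that node0 still exposes the 1024 pages that were requested (HugePages_Total: 1024, HugePages_Rsvd: 0, HugePages_Surp: 0 in this run). A minimal sketch of that lookup, assuming an invented helper name and written here only for illustration; the paths and field names are the ones visible in the trace:

#!/usr/bin/env bash
# Minimal sketch of the lookup the trace performs; get_meminfo_sketch is an
# invented name, not the SPDK helper itself.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # With a node id, prefer the per-node file (its fields carry a "Node <id> " prefix).
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _rest
    while IFS=': ' read -r var val _rest; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
    return 1
}

# The assertions seen in the log, restated:
total=$(get_meminfo_sketch HugePages_Total 0)   # -> 1024 in this run
resv=$(get_meminfo_sketch HugePages_Rsvd 0)     # -> 0
surp=$(get_meminfo_sketch HugePages_Surp 0)     # -> 0
(( total == 1024 )) && echo "node0=$total expecting 1024"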
00:06:07.716 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:07.716 08:48:14 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:06:07.716 08:48:14 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:07.716 08:48:14 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:08.282 08:48:15 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:06:08.282 08:48:15 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.282 08:48:15 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.282 08:48:15 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:06:08.282 ************************************ 00:06:08.282 START TEST guess_driver 00:06:08.282 ************************************ 00:06:08.282 08:48:15 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:06:08.282 08:48:15 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:06:08.282 08:48:15 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:06:08.282 08:48:15 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:06:08.282 08:48:15 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:06:08.282 08:48:15 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:06:08.282 08:48:15 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:06:08.282 08:48:15 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:06:08.282 08:48:15 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:06:08.282 08:48:15 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:06:08.282 08:48:15 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:06:08.282 08:48:15 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:06:08.282 08:48:15 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:06:08.282 08:48:15 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:06:08.282 08:48:15 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:06:08.282 08:48:15 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:06:08.282 08:48:15 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:06:08.282 08:48:15 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:06:08.282 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:06:08.282 08:48:15 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:06:08.282 08:48:15 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:06:08.282 08:48:15 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:06:08.282 Looking for driver=uio_pci_generic 00:06:08.282 08:48:15 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:06:08.282 08:48:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:08.282 08:48:15 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 
00:06:08.282 08:48:15 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:06:08.282 08:48:15 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:08.849 08:48:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:06:08.849 08:48:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:06:08.849 08:48:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:08.849 08:48:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:08.849 08:48:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:06:08.849 08:48:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:09.108 08:48:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:09.109 08:48:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:06:09.109 08:48:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:09.109 08:48:16 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:06:09.109 08:48:16 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:06:09.109 08:48:16 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:09.109 08:48:16 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:09.677 00:06:09.677 real 0m1.409s 00:06:09.677 user 0m0.519s 00:06:09.677 sys 0m0.888s 00:06:09.677 08:48:16 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.677 ************************************ 00:06:09.677 END TEST guess_driver 00:06:09.677 ************************************ 00:06:09.677 08:48:16 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:06:09.677 00:06:09.677 real 0m2.094s 00:06:09.677 user 0m0.745s 00:06:09.677 sys 0m1.399s 00:06:09.677 08:48:16 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.677 08:48:16 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:06:09.677 ************************************ 00:06:09.677 END TEST driver 00:06:09.677 ************************************ 00:06:09.677 08:48:16 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:06:09.677 08:48:16 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.677 08:48:16 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.677 08:48:16 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:09.677 ************************************ 00:06:09.677 START TEST devices 00:06:09.677 ************************************ 00:06:09.677 08:48:16 setup.sh.devices -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:06:09.677 * Looking for test storage... 
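Editor's note on the guess_driver trace above: the decision collapses to a short fallback chain. vfio is rejected because /sys/kernel/iommu_groups is empty and unsafe no-IOMMU mode is not enabled, so the test settles on uio_pci_generic once modprobe --show-depends resolves the module. A condensed sketch under those assumptions; only the paths and commands come from the log, the function name is invented:

#!/usr/bin/env bash
# Condensed sketch of the fallback chain the guess_driver trace records.
shopt -s nullglob   # so an empty iommu_groups directory yields a zero-length array

pick_driver_sketch() {
    local groups=(/sys/kernel/iommu_groups/*)
    local unsafe=''
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    # vfio-pci needs populated IOMMU groups or the unsafe no-IOMMU override;
    # this VM had neither, hence the "(( 0 > 0 ))" and "[[ '' == Y ]]" above.
    if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
        echo vfio-pci
        return 0
    fi
    # Fall back to uio_pci_generic, but only if modprobe can resolve it and its
    # uio dependency as loadable .ko modules on this kernel.
    if modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        echo uio_pci_generic
        return 0
    fi
    echo 'No valid driver found'
    return 1
}

echo "Looking for driver=$(pick_driver_sketch)"   # prints uio_pci_generic on this host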
00:06:09.677 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:09.677 08:48:16 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:06:09.677 08:48:16 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:06:09.677 08:48:16 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:09.677 08:48:16 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:10.611 08:48:17 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:06:10.611 08:48:17 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:06:10.611 08:48:17 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:06:10.611 08:48:17 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:06:10.611 08:48:17 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:10.611 08:48:17 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:06:10.611 08:48:17 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:06:10.611 08:48:17 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:10.611 08:48:17 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:10.611 08:48:17 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:10.611 08:48:17 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:06:10.611 08:48:17 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:06:10.611 08:48:17 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:06:10.611 08:48:17 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:10.611 08:48:17 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:10.611 08:48:17 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 00:06:10.611 08:48:17 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:06:10.611 08:48:17 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:06:10.611 08:48:17 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:10.611 08:48:17 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:10.611 08:48:17 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:06:10.611 08:48:17 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:06:10.611 08:48:17 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:10.611 08:48:17 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:10.611 08:48:17 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:06:10.611 08:48:17 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:06:10.611 08:48:17 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:06:10.611 08:48:17 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:06:10.611 08:48:17 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:06:10.611 08:48:17 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:10.611 08:48:17 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:06:10.611 08:48:17 
setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:06:10.611 08:48:17 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:06:10.611 08:48:17 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:06:10.611 08:48:17 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:06:10.611 08:48:17 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:06:10.611 08:48:17 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:06:10.611 No valid GPT data, bailing 00:06:10.611 08:48:17 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:10.611 08:48:17 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:10.611 08:48:17 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:10.611 08:48:17 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:06:10.611 08:48:17 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:10.611 08:48:17 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:10.611 08:48:17 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:06:10.611 08:48:17 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:06:10.611 08:48:17 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:10.611 08:48:17 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:06:10.611 08:48:17 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:10.611 08:48:17 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:06:10.611 08:48:17 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:06:10.611 08:48:17 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:06:10.611 08:48:17 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:06:10.611 08:48:17 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:06:10.611 08:48:17 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:06:10.611 08:48:17 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:06:10.611 No valid GPT data, bailing 00:06:10.611 08:48:17 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:06:10.611 08:48:17 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:10.611 08:48:17 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:10.611 08:48:17 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:06:10.611 08:48:17 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:06:10.611 08:48:17 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:06:10.611 08:48:17 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:06:10.611 08:48:17 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:06:10.611 08:48:17 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:10.611 08:48:17 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:06:10.611 08:48:17 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:10.611 08:48:17 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:06:10.611 08:48:17 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:06:10.611 08:48:17 setup.sh.devices -- 
setup/devices.sh@202 -- # pci=0000:00:11.0 00:06:10.611 08:48:17 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:06:10.611 08:48:17 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:06:10.611 08:48:17 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:06:10.611 08:48:17 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:06:10.611 No valid GPT data, bailing 00:06:10.611 08:48:17 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:06:10.611 08:48:17 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:10.611 08:48:17 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:10.611 08:48:17 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:06:10.611 08:48:17 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:06:10.611 08:48:17 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:06:10.611 08:48:17 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:06:10.611 08:48:17 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:06:10.611 08:48:17 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:10.611 08:48:17 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:06:10.869 08:48:17 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:10.869 08:48:17 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:06:10.869 08:48:17 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:06:10.869 08:48:17 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:06:10.869 08:48:17 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:06:10.869 08:48:17 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:06:10.869 08:48:17 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:06:10.869 08:48:17 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:06:10.869 No valid GPT data, bailing 00:06:10.869 08:48:17 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:10.869 08:48:17 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:10.869 08:48:17 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:10.869 08:48:17 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:06:10.869 08:48:17 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:06:10.869 08:48:17 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:06:10.869 08:48:17 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:06:10.869 08:48:17 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:06:10.870 08:48:17 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:10.870 08:48:17 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:06:10.870 08:48:17 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:06:10.870 08:48:17 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:06:10.870 08:48:17 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:06:10.870 08:48:17 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:10.870 08:48:17 setup.sh.devices -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.870 08:48:17 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:10.870 ************************************ 00:06:10.870 START TEST nvme_mount 00:06:10.870 ************************************ 00:06:10.870 08:48:17 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:06:10.870 08:48:17 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:06:10.870 08:48:17 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:06:10.870 08:48:17 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:10.870 08:48:17 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:10.870 08:48:17 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:06:10.870 08:48:17 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:10.870 08:48:17 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:06:10.870 08:48:17 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:06:10.870 08:48:17 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:10.870 08:48:17 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:06:10.870 08:48:17 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:06:10.870 08:48:17 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:10.870 08:48:17 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:10.870 08:48:17 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:10.870 08:48:17 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:10.870 08:48:17 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:10.870 08:48:17 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:06:10.870 08:48:17 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:10.870 08:48:17 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:06:11.825 Creating new GPT entries in memory. 00:06:11.825 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:11.825 other utilities. 00:06:11.825 08:48:18 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:06:11.825 08:48:18 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:11.825 08:48:18 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:11.825 08:48:18 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:11.825 08:48:18 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:06:12.758 Creating new GPT entries in memory. 00:06:12.758 The operation has completed successfully. 
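The partition step traced above is the shared partition_drive helper: the 1073741824-byte target size is divided by 4096 in the (( size /= 4096 )) step to give 262144 sectors, the label is wiped with sgdisk --zap-all, and a single partition is created from sector 2048 through 2048 + 262144 - 1 = 264191 while sync_dev_uevents.sh waits for the matching block/partition uevent. A minimal stand-alone sketch of the same arithmetic and calls, assuming a disposable scratch disk (destructive; the device path is taken from the trace):

# sketch: carve one 1 GiB GPT partition the way the trace does
disk=/dev/nvme0n1                      # scratch disk from the trace, wiped completely
size=$((1073741824 / 4096))            # 262144 sectors, matching (( size /= 4096 ))
part_start=2048
part_end=$((part_start + size - 1))    # 264191, as in --new=1:2048:264191 above
sgdisk "$disk" --zap-all
sgdisk "$disk" --new=1:"$part_start":"$part_end"
partprobe "$disk"                      # rough stand-in for sync_dev_uevents.sh block/partition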
00:06:12.758 08:48:19 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:12.758 08:48:19 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:12.758 08:48:19 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 57733 00:06:13.016 08:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:13.016 08:48:19 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:06:13.016 08:48:19 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:13.016 08:48:19 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:06:13.016 08:48:19 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:06:13.016 08:48:19 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:13.016 08:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:13.016 08:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:13.016 08:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:06:13.016 08:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:13.016 08:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:13.016 08:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:13.016 08:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:13.016 08:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:06:13.016 08:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:13.016 08:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:13.016 08:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:13.016 08:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:13.016 08:48:19 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:13.016 08:48:19 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:13.016 08:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:13.016 08:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:06:13.016 08:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:13.016 08:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:13.016 08:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:13.016 08:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:13.275 08:48:20 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:13.275 08:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:13.275 08:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:13.275 08:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:13.275 08:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:13.275 08:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:06:13.275 08:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:13.275 08:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:13.275 08:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:13.275 08:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:06:13.275 08:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:13.533 08:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:13.533 08:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:13.533 08:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:13.533 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:13.533 08:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:13.533 08:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:13.791 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:06:13.791 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:06:13.791 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:13.791 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:13.791 08:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:06:13.791 08:48:20 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:06:13.791 08:48:20 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:13.791 08:48:20 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:06:13.791 08:48:20 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:06:13.791 08:48:20 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:13.791 08:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:13.791 08:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:13.791 08:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local 
mounts=nvme0n1:nvme0n1 00:06:13.791 08:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:13.791 08:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:13.791 08:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:13.792 08:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:13.792 08:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:06:13.792 08:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:13.792 08:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:13.792 08:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:13.792 08:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:13.792 08:48:20 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:13.792 08:48:20 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:14.050 08:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:14.050 08:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:06:14.050 08:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:14.050 08:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.050 08:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:14.050 08:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.050 08:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:14.050 08:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.050 08:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:14.050 08:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.308 08:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:14.308 08:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:06:14.308 08:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:14.308 08:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:14.308 08:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:14.308 08:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:14.308 08:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:06:14.308 08:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:14.308 08:48:21 setup.sh.devices.nvme_mount -- 
setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:06:14.308 08:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:06:14.308 08:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:06:14.308 08:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:14.308 08:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:14.308 08:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:14.308 08:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.308 08:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:14.308 08:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:14.308 08:48:21 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:14.309 08:48:21 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:14.567 08:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:14.567 08:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:06:14.567 08:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:14.567 08:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.567 08:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:14.567 08:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.567 08:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:14.567 08:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.825 08:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:14.825 08:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.825 08:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:14.825 08:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:14.825 08:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:06:14.825 08:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:06:14.825 08:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:14.825 08:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:14.825 08:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:14.825 08:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:14.825 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:14.825 00:06:14.825 real 0m4.010s 00:06:14.825 user 0m0.702s 00:06:14.825 sys 0m1.047s 00:06:14.825 08:48:21 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.825 08:48:21 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:06:14.825 ************************************ 00:06:14.825 END TEST nvme_mount 00:06:14.825 
************************************ 00:06:14.825 08:48:21 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:06:14.825 08:48:21 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:14.825 08:48:21 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.825 08:48:21 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:14.825 ************************************ 00:06:14.825 START TEST dm_mount 00:06:14.825 ************************************ 00:06:14.825 08:48:21 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:06:14.825 08:48:21 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:06:14.825 08:48:21 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:06:14.825 08:48:21 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:06:14.825 08:48:21 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:06:14.825 08:48:21 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:14.825 08:48:21 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:06:14.825 08:48:21 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:06:14.825 08:48:21 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:14.825 08:48:21 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:06:14.825 08:48:21 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:06:14.825 08:48:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:14.825 08:48:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:14.825 08:48:21 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:14.826 08:48:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:14.826 08:48:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:14.826 08:48:21 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:14.826 08:48:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:14.826 08:48:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:14.826 08:48:21 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:06:14.826 08:48:21 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:14.826 08:48:21 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:06:16.199 Creating new GPT entries in memory. 00:06:16.199 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:16.199 other utilities. 00:06:16.199 08:48:22 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:06:16.199 08:48:22 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:16.199 08:48:22 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:16.199 08:48:22 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:16.199 08:48:22 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:06:17.130 Creating new GPT entries in memory. 00:06:17.130 The operation has completed successfully. 
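For dm_mount the same helper runs with part_no=2, so a second sgdisk --new call covering sectors 264192-526335 follows immediately below, and the two resulting 1 GiB partitions are then joined into the nvme_dm_test device-mapper node that the rest of the trace formats and mounts. The exact dm table built by devices.sh is not visible in the trace; a purely illustrative linear concatenation of the two partitions would look like:

# sketch: join the two test partitions into one linear dm device (table is illustrative, not taken from the trace)
p1=/dev/nvme0n1p1
p2=/dev/nvme0n1p2
s1=$(blockdev --getsz "$p1")           # sizes in 512-byte sectors
s2=$(blockdev --getsz "$p2")
dmsetup create nvme_dm_test <<EOF
0 $s1 linear $p1 0
$s1 $s2 linear $p2 0
EOF
readlink -f /dev/mapper/nvme_dm_test   # resolves to /dev/dm-0, as checked a few lines below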
00:06:17.130 08:48:23 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:17.130 08:48:23 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:17.130 08:48:23 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:17.130 08:48:23 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:17.130 08:48:23 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:06:18.063 The operation has completed successfully. 00:06:18.063 08:48:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:18.063 08:48:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:18.063 08:48:24 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 58170 00:06:18.063 08:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:06:18.063 08:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:18.063 08:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:18.063 08:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:06:18.063 08:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:06:18.063 08:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:18.063 08:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:06:18.063 08:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:18.063 08:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:06:18.063 08:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:06:18.063 08:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:06:18.063 08:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:06:18.063 08:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:06:18.063 08:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:18.063 08:48:24 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:06:18.063 08:48:24 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:18.063 08:48:24 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:18.063 08:48:24 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:06:18.063 08:48:25 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:18.063 08:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:18.063 08:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:18.063 08:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@49 
-- # local mounts=nvme0n1:nvme_dm_test 00:06:18.063 08:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:18.063 08:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:18.063 08:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:06:18.063 08:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:06:18.063 08:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:06:18.063 08:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:18.063 08:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:18.063 08:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:18.063 08:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:18.063 08:48:25 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:18.063 08:48:25 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:18.321 08:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:18.321 08:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:06:18.321 08:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:18.321 08:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:18.321 08:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:18.321 08:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:18.321 08:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:18.322 08:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:18.579 08:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:18.579 08:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:18.579 08:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:18.579 08:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:06:18.579 08:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:18.579 08:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:06:18.579 08:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:18.579 08:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:18.579 08:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:06:18.579 08:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:18.579 08:48:25 
setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:06:18.579 08:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:06:18.579 08:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:06:18.579 08:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:06:18.579 08:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:18.579 08:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:18.579 08:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:18.579 08:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:18.579 08:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:18.579 08:48:25 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:18.579 08:48:25 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:18.838 08:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:18.838 08:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:06:18.838 08:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:18.838 08:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:18.838 08:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:18.838 08:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:18.838 08:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:18.838 08:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.095 08:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:19.095 08:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.095 08:48:26 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:19.095 08:48:26 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:19.095 08:48:26 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:06:19.095 08:48:26 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:06:19.095 08:48:26 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:19.096 08:48:26 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:19.096 08:48:26 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:06:19.096 08:48:26 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:19.096 08:48:26 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:06:19.096 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:19.096 08:48:26 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:19.096 08:48:26 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all 
/dev/nvme0n1p2 00:06:19.096 00:06:19.096 real 0m4.231s 00:06:19.096 user 0m0.455s 00:06:19.096 sys 0m0.734s 00:06:19.096 08:48:26 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.096 08:48:26 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:06:19.096 ************************************ 00:06:19.096 END TEST dm_mount 00:06:19.096 ************************************ 00:06:19.096 08:48:26 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:06:19.096 08:48:26 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:06:19.096 08:48:26 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:19.096 08:48:26 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:19.096 08:48:26 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:19.096 08:48:26 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:19.096 08:48:26 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:19.354 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:06:19.354 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:06:19.354 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:19.354 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:19.354 08:48:26 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:06:19.354 08:48:26 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:19.354 08:48:26 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:19.354 08:48:26 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:19.354 08:48:26 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:19.354 08:48:26 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:06:19.354 08:48:26 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:06:19.354 00:06:19.354 real 0m9.738s 00:06:19.354 user 0m1.780s 00:06:19.354 sys 0m2.360s 00:06:19.354 08:48:26 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.354 08:48:26 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:19.354 ************************************ 00:06:19.354 END TEST devices 00:06:19.354 ************************************ 00:06:19.612 ************************************ 00:06:19.612 END TEST setup.sh 00:06:19.612 ************************************ 00:06:19.612 00:06:19.612 real 0m21.461s 00:06:19.612 user 0m6.885s 00:06:19.612 sys 0m9.046s 00:06:19.612 08:48:26 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.612 08:48:26 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:19.612 08:48:26 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:20.178 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:20.178 Hugepages 00:06:20.178 node hugesize free / total 00:06:20.178 node0 1048576kB 0 / 0 00:06:20.178 node0 2048kB 2048 / 2048 00:06:20.178 00:06:20.178 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:20.178 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:20.436 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:06:20.436 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 
nvme0n2 nvme0n3 00:06:20.436 08:48:27 -- spdk/autotest.sh@130 -- # uname -s 00:06:20.436 08:48:27 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:06:20.436 08:48:27 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:06:20.436 08:48:27 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:21.002 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:21.260 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:21.260 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:21.260 08:48:28 -- common/autotest_common.sh@1532 -- # sleep 1 00:06:22.251 08:48:29 -- common/autotest_common.sh@1533 -- # bdfs=() 00:06:22.251 08:48:29 -- common/autotest_common.sh@1533 -- # local bdfs 00:06:22.251 08:48:29 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:06:22.251 08:48:29 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:06:22.251 08:48:29 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:22.251 08:48:29 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:22.251 08:48:29 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:22.251 08:48:29 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:22.251 08:48:29 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:22.251 08:48:29 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:06:22.251 08:48:29 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:22.251 08:48:29 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:22.817 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:22.817 Waiting for block devices as requested 00:06:22.817 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:22.817 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:22.817 08:48:29 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:22.817 08:48:29 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:22.817 08:48:29 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:22.817 08:48:29 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:06:22.817 08:48:29 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:22.817 08:48:29 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:22.817 08:48:29 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:22.817 08:48:29 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:06:22.817 08:48:29 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:06:22.817 08:48:29 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:06:22.817 08:48:29 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:06:22.817 08:48:29 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:22.817 08:48:29 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:22.817 08:48:29 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:06:22.817 08:48:29 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:22.817 08:48:29 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:22.817 08:48:29 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 
00:06:22.817 08:48:29 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:22.817 08:48:29 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:22.817 08:48:29 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:23.075 08:48:29 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:23.075 08:48:29 -- common/autotest_common.sh@1557 -- # continue 00:06:23.075 08:48:29 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:23.075 08:48:29 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:23.075 08:48:29 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:06:23.075 08:48:29 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:23.075 08:48:29 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:23.075 08:48:29 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:23.075 08:48:29 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:23.075 08:48:29 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:06:23.075 08:48:29 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:06:23.075 08:48:29 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:06:23.075 08:48:29 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:06:23.075 08:48:29 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:23.075 08:48:29 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:23.075 08:48:29 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:06:23.075 08:48:29 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:23.075 08:48:29 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:23.075 08:48:29 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:06:23.075 08:48:29 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:23.075 08:48:29 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:23.075 08:48:29 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:23.075 08:48:29 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:23.075 08:48:29 -- common/autotest_common.sh@1557 -- # continue 00:06:23.075 08:48:29 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:06:23.075 08:48:29 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:23.075 08:48:29 -- common/autotest_common.sh@10 -- # set +x 00:06:23.075 08:48:30 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:06:23.075 08:48:30 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:23.075 08:48:30 -- common/autotest_common.sh@10 -- # set +x 00:06:23.075 08:48:30 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:23.641 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:23.641 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:23.898 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:23.898 08:48:30 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:23.898 08:48:30 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:23.898 08:48:30 -- common/autotest_common.sh@10 -- # set +x 00:06:23.898 08:48:30 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:23.898 08:48:30 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:06:23.899 08:48:30 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:06:23.899 08:48:30 -- common/autotest_common.sh@1577 -- 
# bdfs=() 00:06:23.899 08:48:30 -- common/autotest_common.sh@1577 -- # local bdfs 00:06:23.899 08:48:30 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:06:23.899 08:48:30 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:23.899 08:48:30 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:23.899 08:48:30 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:23.899 08:48:30 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:23.899 08:48:30 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:23.899 08:48:30 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:06:23.899 08:48:30 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:23.899 08:48:30 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:23.899 08:48:30 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:23.899 08:48:30 -- common/autotest_common.sh@1580 -- # device=0x0010 00:06:23.899 08:48:30 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:23.899 08:48:30 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:23.899 08:48:30 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:23.899 08:48:30 -- common/autotest_common.sh@1580 -- # device=0x0010 00:06:23.899 08:48:30 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:23.899 08:48:30 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:06:23.899 08:48:30 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:06:23.899 08:48:30 -- common/autotest_common.sh@1593 -- # return 0 00:06:23.899 08:48:30 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:06:23.899 08:48:30 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:06:23.899 08:48:30 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:23.899 08:48:30 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:23.899 08:48:30 -- spdk/autotest.sh@162 -- # timing_enter lib 00:06:23.899 08:48:30 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:23.899 08:48:30 -- common/autotest_common.sh@10 -- # set +x 00:06:23.899 08:48:30 -- spdk/autotest.sh@164 -- # [[ 1 -eq 1 ]] 00:06:23.899 08:48:30 -- spdk/autotest.sh@165 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:06:23.899 08:48:30 -- spdk/autotest.sh@165 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:06:23.899 08:48:30 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:23.899 08:48:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:23.899 08:48:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.899 08:48:30 -- common/autotest_common.sh@10 -- # set +x 00:06:23.899 ************************************ 00:06:23.899 START TEST env 00:06:23.899 ************************************ 00:06:23.899 08:48:30 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:24.157 * Looking for test storage... 
00:06:24.157 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:24.157 08:48:31 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:24.157 08:48:31 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:24.157 08:48:31 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.157 08:48:31 env -- common/autotest_common.sh@10 -- # set +x 00:06:24.157 ************************************ 00:06:24.157 START TEST env_memory 00:06:24.157 ************************************ 00:06:24.157 08:48:31 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:24.157 00:06:24.157 00:06:24.157 CUnit - A unit testing framework for C - Version 2.1-3 00:06:24.157 http://cunit.sourceforge.net/ 00:06:24.157 00:06:24.157 00:06:24.157 Suite: memory 00:06:24.157 Test: alloc and free memory map ...[2024-07-25 08:48:31.138114] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:24.157 passed 00:06:24.157 Test: mem map translation ...[2024-07-25 08:48:31.201865] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:24.157 [2024-07-25 08:48:31.201963] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:24.157 [2024-07-25 08:48:31.202058] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:24.157 [2024-07-25 08:48:31.202091] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:24.415 passed 00:06:24.415 Test: mem map registration ...[2024-07-25 08:48:31.300888] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:24.415 [2024-07-25 08:48:31.300976] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:24.415 passed 00:06:24.415 Test: mem map adjacent registrations ...passed 00:06:24.415 00:06:24.415 Run Summary: Type Total Ran Passed Failed Inactive 00:06:24.415 suites 1 1 n/a 0 0 00:06:24.415 tests 4 4 4 0 0 00:06:24.415 asserts 152 152 152 0 n/a 00:06:24.415 00:06:24.415 Elapsed time = 0.343 seconds 00:06:24.415 00:06:24.415 real 0m0.384s 00:06:24.415 user 0m0.349s 00:06:24.415 sys 0m0.026s 00:06:24.415 08:48:31 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.415 08:48:31 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:24.415 ************************************ 00:06:24.415 END TEST env_memory 00:06:24.415 ************************************ 00:06:24.415 08:48:31 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:24.415 08:48:31 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:24.415 08:48:31 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.415 08:48:31 env -- common/autotest_common.sh@10 -- # set +x 00:06:24.415 ************************************ 00:06:24.415 START TEST env_vtophys 00:06:24.415 ************************************ 00:06:24.415 08:48:31 
env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:24.673 EAL: lib.eal log level changed from notice to debug 00:06:24.673 EAL: Detected lcore 0 as core 0 on socket 0 00:06:24.673 EAL: Detected lcore 1 as core 0 on socket 0 00:06:24.673 EAL: Detected lcore 2 as core 0 on socket 0 00:06:24.673 EAL: Detected lcore 3 as core 0 on socket 0 00:06:24.673 EAL: Detected lcore 4 as core 0 on socket 0 00:06:24.673 EAL: Detected lcore 5 as core 0 on socket 0 00:06:24.673 EAL: Detected lcore 6 as core 0 on socket 0 00:06:24.673 EAL: Detected lcore 7 as core 0 on socket 0 00:06:24.673 EAL: Detected lcore 8 as core 0 on socket 0 00:06:24.673 EAL: Detected lcore 9 as core 0 on socket 0 00:06:24.673 EAL: Maximum logical cores by configuration: 128 00:06:24.673 EAL: Detected CPU lcores: 10 00:06:24.673 EAL: Detected NUMA nodes: 1 00:06:24.673 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:24.673 EAL: Detected shared linkage of DPDK 00:06:24.673 EAL: No shared files mode enabled, IPC will be disabled 00:06:24.673 EAL: Selected IOVA mode 'PA' 00:06:24.673 EAL: Probing VFIO support... 00:06:24.673 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:24.673 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:24.673 EAL: Ask a virtual area of 0x2e000 bytes 00:06:24.673 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:24.673 EAL: Setting up physically contiguous memory... 00:06:24.673 EAL: Setting maximum number of open files to 524288 00:06:24.673 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:24.673 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:24.673 EAL: Ask a virtual area of 0x61000 bytes 00:06:24.673 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:24.673 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:24.673 EAL: Ask a virtual area of 0x400000000 bytes 00:06:24.673 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:24.673 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:24.673 EAL: Ask a virtual area of 0x61000 bytes 00:06:24.673 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:24.673 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:24.673 EAL: Ask a virtual area of 0x400000000 bytes 00:06:24.673 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:24.673 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:24.673 EAL: Ask a virtual area of 0x61000 bytes 00:06:24.673 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:24.673 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:24.673 EAL: Ask a virtual area of 0x400000000 bytes 00:06:24.673 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:24.673 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:24.673 EAL: Ask a virtual area of 0x61000 bytes 00:06:24.673 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:24.673 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:24.673 EAL: Ask a virtual area of 0x400000000 bytes 00:06:24.673 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:24.673 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:24.673 EAL: Hugepages will be freed exactly as allocated. 
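Each "VA reserved for memseg list" line above reserves exactly 0x400000000 bytes because a list holds n_segs 8192 segments of hugepage_sz 2097152 bytes: 8192 * 2 MiB = 16 GiB, and the four lists together cover 64 GiB of virtual address space starting near the 0x200000000000 base visible in the "Virtual area found" lines. A quick check of that arithmetic:

# sketch: reproduce the memseg reservation sizes printed by EAL above
n_segs=8192
hugepage_sz=2097152                                 # 2 MiB pages, as detected
printf '0x%x\n' $((n_segs * hugepage_sz))           # 0x400000000, matching each reserved size
echo "$((4 * n_segs * hugepage_sz / 1024**3)) GiB"  # 64 GiB of VA across the 4 lists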
00:06:24.673 EAL: No shared files mode enabled, IPC is disabled 00:06:24.673 EAL: No shared files mode enabled, IPC is disabled 00:06:24.673 EAL: TSC frequency is ~2200000 KHz 00:06:24.674 EAL: Main lcore 0 is ready (tid=7fac1aacba40;cpuset=[0]) 00:06:24.674 EAL: Trying to obtain current memory policy. 00:06:24.674 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:24.674 EAL: Restoring previous memory policy: 0 00:06:24.674 EAL: request: mp_malloc_sync 00:06:24.674 EAL: No shared files mode enabled, IPC is disabled 00:06:24.674 EAL: Heap on socket 0 was expanded by 2MB 00:06:24.674 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:24.674 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:24.674 EAL: Mem event callback 'spdk:(nil)' registered 00:06:24.674 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:24.674 00:06:24.674 00:06:24.674 CUnit - A unit testing framework for C - Version 2.1-3 00:06:24.674 http://cunit.sourceforge.net/ 00:06:24.674 00:06:24.674 00:06:24.674 Suite: components_suite 00:06:25.241 Test: vtophys_malloc_test ...passed 00:06:25.241 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:25.241 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:25.241 EAL: Restoring previous memory policy: 4 00:06:25.241 EAL: Calling mem event callback 'spdk:(nil)' 00:06:25.241 EAL: request: mp_malloc_sync 00:06:25.241 EAL: No shared files mode enabled, IPC is disabled 00:06:25.241 EAL: Heap on socket 0 was expanded by 4MB 00:06:25.241 EAL: Calling mem event callback 'spdk:(nil)' 00:06:25.241 EAL: request: mp_malloc_sync 00:06:25.241 EAL: No shared files mode enabled, IPC is disabled 00:06:25.241 EAL: Heap on socket 0 was shrunk by 4MB 00:06:25.241 EAL: Trying to obtain current memory policy. 00:06:25.242 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:25.242 EAL: Restoring previous memory policy: 4 00:06:25.242 EAL: Calling mem event callback 'spdk:(nil)' 00:06:25.242 EAL: request: mp_malloc_sync 00:06:25.242 EAL: No shared files mode enabled, IPC is disabled 00:06:25.242 EAL: Heap on socket 0 was expanded by 6MB 00:06:25.242 EAL: Calling mem event callback 'spdk:(nil)' 00:06:25.242 EAL: request: mp_malloc_sync 00:06:25.242 EAL: No shared files mode enabled, IPC is disabled 00:06:25.242 EAL: Heap on socket 0 was shrunk by 6MB 00:06:25.242 EAL: Trying to obtain current memory policy. 00:06:25.242 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:25.242 EAL: Restoring previous memory policy: 4 00:06:25.242 EAL: Calling mem event callback 'spdk:(nil)' 00:06:25.242 EAL: request: mp_malloc_sync 00:06:25.242 EAL: No shared files mode enabled, IPC is disabled 00:06:25.242 EAL: Heap on socket 0 was expanded by 10MB 00:06:25.242 EAL: Calling mem event callback 'spdk:(nil)' 00:06:25.242 EAL: request: mp_malloc_sync 00:06:25.242 EAL: No shared files mode enabled, IPC is disabled 00:06:25.242 EAL: Heap on socket 0 was shrunk by 10MB 00:06:25.242 EAL: Trying to obtain current memory policy. 
00:06:25.242 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:25.242 EAL: Restoring previous memory policy: 4 00:06:25.242 EAL: Calling mem event callback 'spdk:(nil)' 00:06:25.242 EAL: request: mp_malloc_sync 00:06:25.242 EAL: No shared files mode enabled, IPC is disabled 00:06:25.242 EAL: Heap on socket 0 was expanded by 18MB 00:06:25.242 EAL: Calling mem event callback 'spdk:(nil)' 00:06:25.242 EAL: request: mp_malloc_sync 00:06:25.242 EAL: No shared files mode enabled, IPC is disabled 00:06:25.242 EAL: Heap on socket 0 was shrunk by 18MB 00:06:25.242 EAL: Trying to obtain current memory policy. 00:06:25.242 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:25.242 EAL: Restoring previous memory policy: 4 00:06:25.242 EAL: Calling mem event callback 'spdk:(nil)' 00:06:25.242 EAL: request: mp_malloc_sync 00:06:25.242 EAL: No shared files mode enabled, IPC is disabled 00:06:25.242 EAL: Heap on socket 0 was expanded by 34MB 00:06:25.242 EAL: Calling mem event callback 'spdk:(nil)' 00:06:25.242 EAL: request: mp_malloc_sync 00:06:25.242 EAL: No shared files mode enabled, IPC is disabled 00:06:25.242 EAL: Heap on socket 0 was shrunk by 34MB 00:06:25.242 EAL: Trying to obtain current memory policy. 00:06:25.242 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:25.242 EAL: Restoring previous memory policy: 4 00:06:25.242 EAL: Calling mem event callback 'spdk:(nil)' 00:06:25.242 EAL: request: mp_malloc_sync 00:06:25.242 EAL: No shared files mode enabled, IPC is disabled 00:06:25.242 EAL: Heap on socket 0 was expanded by 66MB 00:06:25.500 EAL: Calling mem event callback 'spdk:(nil)' 00:06:25.500 EAL: request: mp_malloc_sync 00:06:25.500 EAL: No shared files mode enabled, IPC is disabled 00:06:25.500 EAL: Heap on socket 0 was shrunk by 66MB 00:06:25.500 EAL: Trying to obtain current memory policy. 00:06:25.500 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:25.500 EAL: Restoring previous memory policy: 4 00:06:25.500 EAL: Calling mem event callback 'spdk:(nil)' 00:06:25.500 EAL: request: mp_malloc_sync 00:06:25.500 EAL: No shared files mode enabled, IPC is disabled 00:06:25.500 EAL: Heap on socket 0 was expanded by 130MB 00:06:25.771 EAL: Calling mem event callback 'spdk:(nil)' 00:06:25.771 EAL: request: mp_malloc_sync 00:06:25.771 EAL: No shared files mode enabled, IPC is disabled 00:06:25.771 EAL: Heap on socket 0 was shrunk by 130MB 00:06:26.051 EAL: Trying to obtain current memory policy. 00:06:26.051 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:26.051 EAL: Restoring previous memory policy: 4 00:06:26.051 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.051 EAL: request: mp_malloc_sync 00:06:26.051 EAL: No shared files mode enabled, IPC is disabled 00:06:26.051 EAL: Heap on socket 0 was expanded by 258MB 00:06:26.310 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.571 EAL: request: mp_malloc_sync 00:06:26.571 EAL: No shared files mode enabled, IPC is disabled 00:06:26.571 EAL: Heap on socket 0 was shrunk by 258MB 00:06:26.828 EAL: Trying to obtain current memory policy. 
00:06:26.828 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:27.085 EAL: Restoring previous memory policy: 4 00:06:27.085 EAL: Calling mem event callback 'spdk:(nil)' 00:06:27.085 EAL: request: mp_malloc_sync 00:06:27.085 EAL: No shared files mode enabled, IPC is disabled 00:06:27.085 EAL: Heap on socket 0 was expanded by 514MB 00:06:27.650 EAL: Calling mem event callback 'spdk:(nil)' 00:06:27.908 EAL: request: mp_malloc_sync 00:06:27.908 EAL: No shared files mode enabled, IPC is disabled 00:06:27.908 EAL: Heap on socket 0 was shrunk by 514MB 00:06:28.475 EAL: Trying to obtain current memory policy. 00:06:28.475 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:28.733 EAL: Restoring previous memory policy: 4 00:06:28.733 EAL: Calling mem event callback 'spdk:(nil)' 00:06:28.733 EAL: request: mp_malloc_sync 00:06:28.733 EAL: No shared files mode enabled, IPC is disabled 00:06:28.733 EAL: Heap on socket 0 was expanded by 1026MB 00:06:30.634 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.634 EAL: request: mp_malloc_sync 00:06:30.634 EAL: No shared files mode enabled, IPC is disabled 00:06:30.634 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:32.010 passed 00:06:32.010 00:06:32.010 Run Summary: Type Total Ran Passed Failed Inactive 00:06:32.010 suites 1 1 n/a 0 0 00:06:32.010 tests 2 2 2 0 0 00:06:32.010 asserts 5474 5474 5474 0 n/a 00:06:32.010 00:06:32.010 Elapsed time = 7.101 seconds 00:06:32.010 EAL: Calling mem event callback 'spdk:(nil)' 00:06:32.010 EAL: request: mp_malloc_sync 00:06:32.010 EAL: No shared files mode enabled, IPC is disabled 00:06:32.010 EAL: Heap on socket 0 was shrunk by 2MB 00:06:32.010 EAL: No shared files mode enabled, IPC is disabled 00:06:32.010 EAL: No shared files mode enabled, IPC is disabled 00:06:32.010 EAL: No shared files mode enabled, IPC is disabled 00:06:32.010 00:06:32.010 real 0m7.404s 00:06:32.010 user 0m6.275s 00:06:32.010 sys 0m0.965s 00:06:32.010 08:48:38 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.010 08:48:38 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:32.010 ************************************ 00:06:32.010 END TEST env_vtophys 00:06:32.010 ************************************ 00:06:32.010 08:48:38 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:32.010 08:48:38 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:32.010 08:48:38 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.010 08:48:38 env -- common/autotest_common.sh@10 -- # set +x 00:06:32.010 ************************************ 00:06:32.010 START TEST env_pci 00:06:32.010 ************************************ 00:06:32.010 08:48:38 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:32.010 00:06:32.010 00:06:32.010 CUnit - A unit testing framework for C - Version 2.1-3 00:06:32.010 http://cunit.sourceforge.net/ 00:06:32.010 00:06:32.010 00:06:32.010 Suite: pci 00:06:32.010 Test: pci_hook ...[2024-07-25 08:48:38.989874] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 59431 has claimed it 00:06:32.010 passed 00:06:32.010 00:06:32.010 Run Summary: Type Total Ran Passed Failed Inactive 00:06:32.010 suites 1 1 n/a 0 0 00:06:32.010 tests 1 1 1 0 0 00:06:32.010 asserts 25 25 25 0 n/a 00:06:32.010 00:06:32.010 Elapsed time = 0.006 seconds 00:06:32.010 EAL: Cannot find 
device (10000:00:01.0) 00:06:32.010 EAL: Failed to attach device on primary process 00:06:32.010 00:06:32.010 real 0m0.068s 00:06:32.010 user 0m0.035s 00:06:32.010 sys 0m0.032s 00:06:32.010 08:48:39 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.010 08:48:39 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:32.010 ************************************ 00:06:32.010 END TEST env_pci 00:06:32.010 ************************************ 00:06:32.010 08:48:39 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:32.010 08:48:39 env -- env/env.sh@15 -- # uname 00:06:32.010 08:48:39 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:32.010 08:48:39 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:32.010 08:48:39 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:32.010 08:48:39 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:32.010 08:48:39 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.010 08:48:39 env -- common/autotest_common.sh@10 -- # set +x 00:06:32.010 ************************************ 00:06:32.010 START TEST env_dpdk_post_init 00:06:32.010 ************************************ 00:06:32.010 08:48:39 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:32.269 EAL: Detected CPU lcores: 10 00:06:32.269 EAL: Detected NUMA nodes: 1 00:06:32.269 EAL: Detected shared linkage of DPDK 00:06:32.269 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:32.269 EAL: Selected IOVA mode 'PA' 00:06:32.269 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:32.269 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:32.269 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:32.269 Starting DPDK initialization... 00:06:32.269 Starting SPDK post initialization... 00:06:32.269 SPDK NVMe probe 00:06:32.269 Attaching to 0000:00:10.0 00:06:32.269 Attaching to 0000:00:11.0 00:06:32.269 Attached to 0000:00:10.0 00:06:32.269 Attached to 0000:00:11.0 00:06:32.269 Cleaning up... 
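The env_dpdk_post_init step above probes the two emulated NVMe controllers (0000:00:10.0 and 0000:00:11.0), attaches to both, and cleans up. To re-run just this step, the same invocation shown in the xtrace can be used directly (paths and flags exactly as they appear in this log):

  # Re-run the DPDK post-init probe on core 0 with the pinned base virtual address
  /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
      -c 0x1 --base-virtaddr=0x200000000000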
00:06:32.269 00:06:32.269 real 0m0.293s 00:06:32.269 user 0m0.102s 00:06:32.269 sys 0m0.090s 00:06:32.269 08:48:39 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.269 08:48:39 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:32.269 ************************************ 00:06:32.269 END TEST env_dpdk_post_init 00:06:32.269 ************************************ 00:06:32.527 08:48:39 env -- env/env.sh@26 -- # uname 00:06:32.527 08:48:39 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:32.527 08:48:39 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:32.527 08:48:39 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:32.527 08:48:39 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.527 08:48:39 env -- common/autotest_common.sh@10 -- # set +x 00:06:32.527 ************************************ 00:06:32.527 START TEST env_mem_callbacks 00:06:32.527 ************************************ 00:06:32.527 08:48:39 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:32.527 EAL: Detected CPU lcores: 10 00:06:32.527 EAL: Detected NUMA nodes: 1 00:06:32.527 EAL: Detected shared linkage of DPDK 00:06:32.527 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:32.527 EAL: Selected IOVA mode 'PA' 00:06:32.527 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:32.527 00:06:32.527 00:06:32.527 CUnit - A unit testing framework for C - Version 2.1-3 00:06:32.527 http://cunit.sourceforge.net/ 00:06:32.527 00:06:32.527 00:06:32.527 Suite: memory 00:06:32.527 Test: test ... 00:06:32.527 register 0x200000200000 2097152 00:06:32.527 malloc 3145728 00:06:32.527 register 0x200000400000 4194304 00:06:32.527 buf 0x2000004fffc0 len 3145728 PASSED 00:06:32.527 malloc 64 00:06:32.527 buf 0x2000004ffec0 len 64 PASSED 00:06:32.527 malloc 4194304 00:06:32.527 register 0x200000800000 6291456 00:06:32.527 buf 0x2000009fffc0 len 4194304 PASSED 00:06:32.527 free 0x2000004fffc0 3145728 00:06:32.786 free 0x2000004ffec0 64 00:06:32.786 unregister 0x200000400000 4194304 PASSED 00:06:32.786 free 0x2000009fffc0 4194304 00:06:32.786 unregister 0x200000800000 6291456 PASSED 00:06:32.786 malloc 8388608 00:06:32.786 register 0x200000400000 10485760 00:06:32.786 buf 0x2000005fffc0 len 8388608 PASSED 00:06:32.786 free 0x2000005fffc0 8388608 00:06:32.786 unregister 0x200000400000 10485760 PASSED 00:06:32.786 passed 00:06:32.786 00:06:32.786 Run Summary: Type Total Ran Passed Failed Inactive 00:06:32.786 suites 1 1 n/a 0 0 00:06:32.786 tests 1 1 1 0 0 00:06:32.786 asserts 15 15 15 0 n/a 00:06:32.786 00:06:32.786 Elapsed time = 0.077 seconds 00:06:32.786 00:06:32.786 real 0m0.291s 00:06:32.786 user 0m0.107s 00:06:32.786 sys 0m0.082s 00:06:32.786 08:48:39 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.786 08:48:39 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:32.786 ************************************ 00:06:32.786 END TEST env_mem_callbacks 00:06:32.786 ************************************ 00:06:32.786 00:06:32.786 real 0m8.796s 00:06:32.786 user 0m6.986s 00:06:32.786 sys 0m1.416s 00:06:32.786 08:48:39 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.786 08:48:39 env -- common/autotest_common.sh@10 -- # set +x 00:06:32.786 ************************************ 00:06:32.786 END TEST env 00:06:32.786 
************************************ 00:06:32.786 08:48:39 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:32.786 08:48:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:32.786 08:48:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.786 08:48:39 -- common/autotest_common.sh@10 -- # set +x 00:06:32.786 ************************************ 00:06:32.786 START TEST rpc 00:06:32.786 ************************************ 00:06:32.786 08:48:39 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:32.786 * Looking for test storage... 00:06:33.044 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:33.044 08:48:39 rpc -- rpc/rpc.sh@65 -- # spdk_pid=59550 00:06:33.044 08:48:39 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:33.044 08:48:39 rpc -- rpc/rpc.sh@67 -- # waitforlisten 59550 00:06:33.044 08:48:39 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:33.044 08:48:39 rpc -- common/autotest_common.sh@831 -- # '[' -z 59550 ']' 00:06:33.044 08:48:39 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.044 08:48:39 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:33.044 08:48:39 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.044 08:48:39 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:33.044 08:48:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.044 [2024-07-25 08:48:40.043791] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:33.044 [2024-07-25 08:48:40.043981] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59550 ] 00:06:33.303 [2024-07-25 08:48:40.220749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.562 [2024-07-25 08:48:40.499586] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:33.562 [2024-07-25 08:48:40.499642] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 59550' to capture a snapshot of events at runtime. 00:06:33.562 [2024-07-25 08:48:40.499675] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:33.562 [2024-07-25 08:48:40.499689] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:33.562 [2024-07-25 08:48:40.499706] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid59550 for offline analysis/debug. 
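Because the rpc suite starts spdk_tgt with "-e bdev", the bdev tracepoint group is enabled and the target prints the spdk_trace hints captured above. While the target from this run (pid 59550) is still alive, a trace snapshot can be taken exactly as the log suggests (a sketch; the pid changes on every run):

  # Capture a trace snapshot of the running target, as hinted in the log
  spdk_trace -s spdk_tgt -p 59550
  # or keep the shared-memory trace file for offline analysis after the target exits
  cp /dev/shm/spdk_tgt_trace.pid59550 ./spdk_tgt_trace.pid59550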
00:06:33.562 [2024-07-25 08:48:40.499751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.820 [2024-07-25 08:48:40.709062] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:34.386 08:48:41 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:34.386 08:48:41 rpc -- common/autotest_common.sh@864 -- # return 0 00:06:34.386 08:48:41 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:34.386 08:48:41 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:34.386 08:48:41 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:34.386 08:48:41 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:34.386 08:48:41 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.386 08:48:41 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.386 08:48:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.386 ************************************ 00:06:34.386 START TEST rpc_integrity 00:06:34.386 ************************************ 00:06:34.386 08:48:41 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:34.386 08:48:41 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:34.386 08:48:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.386 08:48:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.386 08:48:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.386 08:48:41 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:34.386 08:48:41 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:34.386 08:48:41 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:34.386 08:48:41 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:34.386 08:48:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.386 08:48:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.386 08:48:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.386 08:48:41 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:34.386 08:48:41 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:34.386 08:48:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.386 08:48:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.386 08:48:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.386 08:48:41 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:34.386 { 00:06:34.386 "name": "Malloc0", 00:06:34.386 "aliases": [ 00:06:34.386 "e8d885aa-8d32-4142-8543-2defb622ed19" 00:06:34.386 ], 00:06:34.386 "product_name": "Malloc disk", 00:06:34.386 "block_size": 512, 00:06:34.386 "num_blocks": 16384, 00:06:34.386 "uuid": "e8d885aa-8d32-4142-8543-2defb622ed19", 00:06:34.386 "assigned_rate_limits": { 00:06:34.386 "rw_ios_per_sec": 0, 00:06:34.386 "rw_mbytes_per_sec": 0, 00:06:34.386 "r_mbytes_per_sec": 0, 00:06:34.386 "w_mbytes_per_sec": 0 00:06:34.386 }, 00:06:34.386 "claimed": false, 00:06:34.386 "zoned": false, 00:06:34.386 
"supported_io_types": { 00:06:34.386 "read": true, 00:06:34.386 "write": true, 00:06:34.386 "unmap": true, 00:06:34.386 "flush": true, 00:06:34.386 "reset": true, 00:06:34.386 "nvme_admin": false, 00:06:34.386 "nvme_io": false, 00:06:34.386 "nvme_io_md": false, 00:06:34.386 "write_zeroes": true, 00:06:34.387 "zcopy": true, 00:06:34.387 "get_zone_info": false, 00:06:34.387 "zone_management": false, 00:06:34.387 "zone_append": false, 00:06:34.387 "compare": false, 00:06:34.387 "compare_and_write": false, 00:06:34.387 "abort": true, 00:06:34.387 "seek_hole": false, 00:06:34.387 "seek_data": false, 00:06:34.387 "copy": true, 00:06:34.387 "nvme_iov_md": false 00:06:34.387 }, 00:06:34.387 "memory_domains": [ 00:06:34.387 { 00:06:34.387 "dma_device_id": "system", 00:06:34.387 "dma_device_type": 1 00:06:34.387 }, 00:06:34.387 { 00:06:34.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:34.387 "dma_device_type": 2 00:06:34.387 } 00:06:34.387 ], 00:06:34.387 "driver_specific": {} 00:06:34.387 } 00:06:34.387 ]' 00:06:34.387 08:48:41 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:34.387 08:48:41 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:34.387 08:48:41 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:34.387 08:48:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.387 08:48:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.387 [2024-07-25 08:48:41.470065] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:34.387 [2024-07-25 08:48:41.470149] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:34.387 [2024-07-25 08:48:41.470208] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:06:34.387 [2024-07-25 08:48:41.470257] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:34.387 [2024-07-25 08:48:41.473359] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:34.387 [2024-07-25 08:48:41.473437] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:34.387 Passthru0 00:06:34.387 08:48:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.387 08:48:41 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:34.387 08:48:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.387 08:48:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.646 08:48:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.646 08:48:41 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:34.646 { 00:06:34.646 "name": "Malloc0", 00:06:34.646 "aliases": [ 00:06:34.646 "e8d885aa-8d32-4142-8543-2defb622ed19" 00:06:34.646 ], 00:06:34.646 "product_name": "Malloc disk", 00:06:34.646 "block_size": 512, 00:06:34.646 "num_blocks": 16384, 00:06:34.646 "uuid": "e8d885aa-8d32-4142-8543-2defb622ed19", 00:06:34.646 "assigned_rate_limits": { 00:06:34.646 "rw_ios_per_sec": 0, 00:06:34.646 "rw_mbytes_per_sec": 0, 00:06:34.646 "r_mbytes_per_sec": 0, 00:06:34.646 "w_mbytes_per_sec": 0 00:06:34.646 }, 00:06:34.646 "claimed": true, 00:06:34.646 "claim_type": "exclusive_write", 00:06:34.646 "zoned": false, 00:06:34.646 "supported_io_types": { 00:06:34.646 "read": true, 00:06:34.646 "write": true, 00:06:34.646 "unmap": true, 00:06:34.646 "flush": true, 00:06:34.646 "reset": true, 00:06:34.646 "nvme_admin": false, 
00:06:34.646 "nvme_io": false, 00:06:34.646 "nvme_io_md": false, 00:06:34.646 "write_zeroes": true, 00:06:34.646 "zcopy": true, 00:06:34.646 "get_zone_info": false, 00:06:34.646 "zone_management": false, 00:06:34.646 "zone_append": false, 00:06:34.646 "compare": false, 00:06:34.646 "compare_and_write": false, 00:06:34.646 "abort": true, 00:06:34.646 "seek_hole": false, 00:06:34.646 "seek_data": false, 00:06:34.646 "copy": true, 00:06:34.646 "nvme_iov_md": false 00:06:34.646 }, 00:06:34.646 "memory_domains": [ 00:06:34.646 { 00:06:34.646 "dma_device_id": "system", 00:06:34.646 "dma_device_type": 1 00:06:34.646 }, 00:06:34.646 { 00:06:34.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:34.646 "dma_device_type": 2 00:06:34.646 } 00:06:34.646 ], 00:06:34.646 "driver_specific": {} 00:06:34.646 }, 00:06:34.646 { 00:06:34.646 "name": "Passthru0", 00:06:34.646 "aliases": [ 00:06:34.646 "6757d9a8-613e-5f8b-b094-c901a4c89841" 00:06:34.646 ], 00:06:34.646 "product_name": "passthru", 00:06:34.646 "block_size": 512, 00:06:34.646 "num_blocks": 16384, 00:06:34.646 "uuid": "6757d9a8-613e-5f8b-b094-c901a4c89841", 00:06:34.646 "assigned_rate_limits": { 00:06:34.646 "rw_ios_per_sec": 0, 00:06:34.646 "rw_mbytes_per_sec": 0, 00:06:34.646 "r_mbytes_per_sec": 0, 00:06:34.646 "w_mbytes_per_sec": 0 00:06:34.646 }, 00:06:34.646 "claimed": false, 00:06:34.646 "zoned": false, 00:06:34.646 "supported_io_types": { 00:06:34.646 "read": true, 00:06:34.646 "write": true, 00:06:34.646 "unmap": true, 00:06:34.646 "flush": true, 00:06:34.646 "reset": true, 00:06:34.646 "nvme_admin": false, 00:06:34.646 "nvme_io": false, 00:06:34.646 "nvme_io_md": false, 00:06:34.646 "write_zeroes": true, 00:06:34.646 "zcopy": true, 00:06:34.646 "get_zone_info": false, 00:06:34.646 "zone_management": false, 00:06:34.646 "zone_append": false, 00:06:34.646 "compare": false, 00:06:34.646 "compare_and_write": false, 00:06:34.646 "abort": true, 00:06:34.646 "seek_hole": false, 00:06:34.646 "seek_data": false, 00:06:34.646 "copy": true, 00:06:34.646 "nvme_iov_md": false 00:06:34.646 }, 00:06:34.646 "memory_domains": [ 00:06:34.646 { 00:06:34.646 "dma_device_id": "system", 00:06:34.646 "dma_device_type": 1 00:06:34.646 }, 00:06:34.646 { 00:06:34.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:34.646 "dma_device_type": 2 00:06:34.646 } 00:06:34.646 ], 00:06:34.646 "driver_specific": { 00:06:34.646 "passthru": { 00:06:34.646 "name": "Passthru0", 00:06:34.646 "base_bdev_name": "Malloc0" 00:06:34.646 } 00:06:34.646 } 00:06:34.646 } 00:06:34.646 ]' 00:06:34.646 08:48:41 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:34.646 08:48:41 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:34.646 08:48:41 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:34.646 08:48:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.646 08:48:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.646 08:48:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.646 08:48:41 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:34.646 08:48:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.646 08:48:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.646 08:48:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.647 08:48:41 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:34.647 08:48:41 rpc.rpc_integrity -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.647 08:48:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.647 08:48:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.647 08:48:41 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:34.647 08:48:41 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:34.647 08:48:41 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:34.647 00:06:34.647 real 0m0.371s 00:06:34.647 user 0m0.221s 00:06:34.647 sys 0m0.042s 00:06:34.647 ************************************ 00:06:34.647 END TEST rpc_integrity 00:06:34.647 ************************************ 00:06:34.647 08:48:41 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.647 08:48:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.647 08:48:41 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:34.647 08:48:41 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.647 08:48:41 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.647 08:48:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.647 ************************************ 00:06:34.647 START TEST rpc_plugins 00:06:34.647 ************************************ 00:06:34.647 08:48:41 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:06:34.647 08:48:41 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:34.647 08:48:41 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.647 08:48:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:34.647 08:48:41 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.647 08:48:41 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:34.647 08:48:41 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:34.647 08:48:41 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.647 08:48:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:34.906 08:48:41 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.906 08:48:41 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:34.906 { 00:06:34.906 "name": "Malloc1", 00:06:34.906 "aliases": [ 00:06:34.906 "38350d95-0903-4fae-81bd-710a62e1f44b" 00:06:34.906 ], 00:06:34.906 "product_name": "Malloc disk", 00:06:34.906 "block_size": 4096, 00:06:34.906 "num_blocks": 256, 00:06:34.906 "uuid": "38350d95-0903-4fae-81bd-710a62e1f44b", 00:06:34.906 "assigned_rate_limits": { 00:06:34.906 "rw_ios_per_sec": 0, 00:06:34.906 "rw_mbytes_per_sec": 0, 00:06:34.906 "r_mbytes_per_sec": 0, 00:06:34.906 "w_mbytes_per_sec": 0 00:06:34.906 }, 00:06:34.906 "claimed": false, 00:06:34.906 "zoned": false, 00:06:34.906 "supported_io_types": { 00:06:34.906 "read": true, 00:06:34.906 "write": true, 00:06:34.906 "unmap": true, 00:06:34.906 "flush": true, 00:06:34.906 "reset": true, 00:06:34.906 "nvme_admin": false, 00:06:34.906 "nvme_io": false, 00:06:34.906 "nvme_io_md": false, 00:06:34.906 "write_zeroes": true, 00:06:34.906 "zcopy": true, 00:06:34.906 "get_zone_info": false, 00:06:34.906 "zone_management": false, 00:06:34.906 "zone_append": false, 00:06:34.906 "compare": false, 00:06:34.906 "compare_and_write": false, 00:06:34.906 "abort": true, 00:06:34.906 "seek_hole": false, 00:06:34.906 "seek_data": false, 00:06:34.906 "copy": true, 00:06:34.906 "nvme_iov_md": false 00:06:34.906 }, 00:06:34.906 "memory_domains": [ 00:06:34.906 { 
00:06:34.906 "dma_device_id": "system", 00:06:34.906 "dma_device_type": 1 00:06:34.906 }, 00:06:34.906 { 00:06:34.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:34.906 "dma_device_type": 2 00:06:34.906 } 00:06:34.906 ], 00:06:34.906 "driver_specific": {} 00:06:34.906 } 00:06:34.906 ]' 00:06:34.906 08:48:41 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:34.906 08:48:41 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:34.906 08:48:41 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:34.906 08:48:41 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.906 08:48:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:34.906 08:48:41 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.906 08:48:41 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:34.906 08:48:41 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.906 08:48:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:34.906 08:48:41 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.906 08:48:41 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:34.906 08:48:41 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:34.906 08:48:41 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:34.906 00:06:34.906 real 0m0.162s 00:06:34.906 user 0m0.110s 00:06:34.906 sys 0m0.013s 00:06:34.906 ************************************ 00:06:34.906 END TEST rpc_plugins 00:06:34.906 ************************************ 00:06:34.906 08:48:41 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.906 08:48:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:34.906 08:48:41 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:34.906 08:48:41 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.906 08:48:41 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.906 08:48:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.906 ************************************ 00:06:34.906 START TEST rpc_trace_cmd_test 00:06:34.906 ************************************ 00:06:34.906 08:48:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:06:34.906 08:48:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:34.906 08:48:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:34.906 08:48:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.906 08:48:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.906 08:48:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.906 08:48:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:34.906 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid59550", 00:06:34.906 "tpoint_group_mask": "0x8", 00:06:34.906 "iscsi_conn": { 00:06:34.906 "mask": "0x2", 00:06:34.906 "tpoint_mask": "0x0" 00:06:34.906 }, 00:06:34.906 "scsi": { 00:06:34.906 "mask": "0x4", 00:06:34.906 "tpoint_mask": "0x0" 00:06:34.906 }, 00:06:34.906 "bdev": { 00:06:34.906 "mask": "0x8", 00:06:34.906 "tpoint_mask": "0xffffffffffffffff" 00:06:34.906 }, 00:06:34.906 "nvmf_rdma": { 00:06:34.906 "mask": "0x10", 00:06:34.906 "tpoint_mask": "0x0" 00:06:34.906 }, 00:06:34.906 "nvmf_tcp": { 00:06:34.906 "mask": "0x20", 00:06:34.906 "tpoint_mask": "0x0" 00:06:34.906 }, 00:06:34.906 "ftl": { 00:06:34.906 
"mask": "0x40", 00:06:34.906 "tpoint_mask": "0x0" 00:06:34.906 }, 00:06:34.906 "blobfs": { 00:06:34.906 "mask": "0x80", 00:06:34.906 "tpoint_mask": "0x0" 00:06:34.906 }, 00:06:34.906 "dsa": { 00:06:34.906 "mask": "0x200", 00:06:34.906 "tpoint_mask": "0x0" 00:06:34.906 }, 00:06:34.906 "thread": { 00:06:34.906 "mask": "0x400", 00:06:34.906 "tpoint_mask": "0x0" 00:06:34.906 }, 00:06:34.906 "nvme_pcie": { 00:06:34.906 "mask": "0x800", 00:06:34.906 "tpoint_mask": "0x0" 00:06:34.906 }, 00:06:34.906 "iaa": { 00:06:34.906 "mask": "0x1000", 00:06:34.906 "tpoint_mask": "0x0" 00:06:34.906 }, 00:06:34.906 "nvme_tcp": { 00:06:34.906 "mask": "0x2000", 00:06:34.906 "tpoint_mask": "0x0" 00:06:34.906 }, 00:06:34.906 "bdev_nvme": { 00:06:34.906 "mask": "0x4000", 00:06:34.906 "tpoint_mask": "0x0" 00:06:34.906 }, 00:06:34.906 "sock": { 00:06:34.906 "mask": "0x8000", 00:06:34.906 "tpoint_mask": "0x0" 00:06:34.906 } 00:06:34.906 }' 00:06:34.906 08:48:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:34.906 08:48:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:34.906 08:48:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:35.165 08:48:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:35.165 08:48:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:35.165 08:48:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:35.165 08:48:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:35.165 08:48:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:35.165 08:48:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:35.165 08:48:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:35.165 00:06:35.165 real 0m0.264s 00:06:35.165 user 0m0.236s 00:06:35.165 sys 0m0.022s 00:06:35.165 ************************************ 00:06:35.165 END TEST rpc_trace_cmd_test 00:06:35.165 ************************************ 00:06:35.165 08:48:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.165 08:48:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.165 08:48:42 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:35.165 08:48:42 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:35.165 08:48:42 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:35.165 08:48:42 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:35.165 08:48:42 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.165 08:48:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.165 ************************************ 00:06:35.165 START TEST rpc_daemon_integrity 00:06:35.165 ************************************ 00:06:35.165 08:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:35.165 08:48:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:35.165 08:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.165 08:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:35.165 08:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.165 08:48:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:35.165 08:48:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:35.424 08:48:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 
00:06:35.424 08:48:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:35.424 08:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.424 08:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:35.424 08:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.424 08:48:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:35.424 08:48:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:35.424 08:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.424 08:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:35.424 08:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.424 08:48:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:35.424 { 00:06:35.424 "name": "Malloc2", 00:06:35.424 "aliases": [ 00:06:35.424 "16469df8-58c8-469a-bd1c-bcee5e9fdde1" 00:06:35.424 ], 00:06:35.424 "product_name": "Malloc disk", 00:06:35.424 "block_size": 512, 00:06:35.424 "num_blocks": 16384, 00:06:35.424 "uuid": "16469df8-58c8-469a-bd1c-bcee5e9fdde1", 00:06:35.424 "assigned_rate_limits": { 00:06:35.424 "rw_ios_per_sec": 0, 00:06:35.424 "rw_mbytes_per_sec": 0, 00:06:35.424 "r_mbytes_per_sec": 0, 00:06:35.424 "w_mbytes_per_sec": 0 00:06:35.424 }, 00:06:35.424 "claimed": false, 00:06:35.424 "zoned": false, 00:06:35.424 "supported_io_types": { 00:06:35.424 "read": true, 00:06:35.424 "write": true, 00:06:35.424 "unmap": true, 00:06:35.424 "flush": true, 00:06:35.424 "reset": true, 00:06:35.424 "nvme_admin": false, 00:06:35.424 "nvme_io": false, 00:06:35.424 "nvme_io_md": false, 00:06:35.424 "write_zeroes": true, 00:06:35.424 "zcopy": true, 00:06:35.424 "get_zone_info": false, 00:06:35.424 "zone_management": false, 00:06:35.424 "zone_append": false, 00:06:35.424 "compare": false, 00:06:35.424 "compare_and_write": false, 00:06:35.424 "abort": true, 00:06:35.424 "seek_hole": false, 00:06:35.424 "seek_data": false, 00:06:35.424 "copy": true, 00:06:35.424 "nvme_iov_md": false 00:06:35.424 }, 00:06:35.424 "memory_domains": [ 00:06:35.424 { 00:06:35.424 "dma_device_id": "system", 00:06:35.424 "dma_device_type": 1 00:06:35.424 }, 00:06:35.424 { 00:06:35.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:35.424 "dma_device_type": 2 00:06:35.424 } 00:06:35.424 ], 00:06:35.424 "driver_specific": {} 00:06:35.424 } 00:06:35.424 ]' 00:06:35.424 08:48:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:35.424 08:48:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:35.424 08:48:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:35.424 08:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.424 08:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:35.424 [2024-07-25 08:48:42.414089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:35.424 [2024-07-25 08:48:42.414163] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:35.425 [2024-07-25 08:48:42.414221] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:06:35.425 [2024-07-25 08:48:42.414273] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:35.425 [2024-07-25 08:48:42.417262] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:35.425 [2024-07-25 08:48:42.417336] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:35.425 Passthru0 00:06:35.425 08:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.425 08:48:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:35.425 08:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.425 08:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:35.425 08:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.425 08:48:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:35.425 { 00:06:35.425 "name": "Malloc2", 00:06:35.425 "aliases": [ 00:06:35.425 "16469df8-58c8-469a-bd1c-bcee5e9fdde1" 00:06:35.425 ], 00:06:35.425 "product_name": "Malloc disk", 00:06:35.425 "block_size": 512, 00:06:35.425 "num_blocks": 16384, 00:06:35.425 "uuid": "16469df8-58c8-469a-bd1c-bcee5e9fdde1", 00:06:35.425 "assigned_rate_limits": { 00:06:35.425 "rw_ios_per_sec": 0, 00:06:35.425 "rw_mbytes_per_sec": 0, 00:06:35.425 "r_mbytes_per_sec": 0, 00:06:35.425 "w_mbytes_per_sec": 0 00:06:35.425 }, 00:06:35.425 "claimed": true, 00:06:35.425 "claim_type": "exclusive_write", 00:06:35.425 "zoned": false, 00:06:35.425 "supported_io_types": { 00:06:35.425 "read": true, 00:06:35.425 "write": true, 00:06:35.425 "unmap": true, 00:06:35.425 "flush": true, 00:06:35.425 "reset": true, 00:06:35.425 "nvme_admin": false, 00:06:35.425 "nvme_io": false, 00:06:35.425 "nvme_io_md": false, 00:06:35.425 "write_zeroes": true, 00:06:35.425 "zcopy": true, 00:06:35.425 "get_zone_info": false, 00:06:35.425 "zone_management": false, 00:06:35.425 "zone_append": false, 00:06:35.425 "compare": false, 00:06:35.425 "compare_and_write": false, 00:06:35.425 "abort": true, 00:06:35.425 "seek_hole": false, 00:06:35.425 "seek_data": false, 00:06:35.425 "copy": true, 00:06:35.425 "nvme_iov_md": false 00:06:35.425 }, 00:06:35.425 "memory_domains": [ 00:06:35.425 { 00:06:35.425 "dma_device_id": "system", 00:06:35.425 "dma_device_type": 1 00:06:35.425 }, 00:06:35.425 { 00:06:35.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:35.425 "dma_device_type": 2 00:06:35.425 } 00:06:35.425 ], 00:06:35.425 "driver_specific": {} 00:06:35.425 }, 00:06:35.425 { 00:06:35.425 "name": "Passthru0", 00:06:35.425 "aliases": [ 00:06:35.425 "e365c764-18cf-54a5-93e2-c7f2e740f3a5" 00:06:35.425 ], 00:06:35.425 "product_name": "passthru", 00:06:35.425 "block_size": 512, 00:06:35.425 "num_blocks": 16384, 00:06:35.425 "uuid": "e365c764-18cf-54a5-93e2-c7f2e740f3a5", 00:06:35.425 "assigned_rate_limits": { 00:06:35.425 "rw_ios_per_sec": 0, 00:06:35.425 "rw_mbytes_per_sec": 0, 00:06:35.425 "r_mbytes_per_sec": 0, 00:06:35.425 "w_mbytes_per_sec": 0 00:06:35.425 }, 00:06:35.425 "claimed": false, 00:06:35.425 "zoned": false, 00:06:35.425 "supported_io_types": { 00:06:35.425 "read": true, 00:06:35.425 "write": true, 00:06:35.425 "unmap": true, 00:06:35.425 "flush": true, 00:06:35.425 "reset": true, 00:06:35.425 "nvme_admin": false, 00:06:35.425 "nvme_io": false, 00:06:35.425 "nvme_io_md": false, 00:06:35.425 "write_zeroes": true, 00:06:35.425 "zcopy": true, 00:06:35.425 "get_zone_info": false, 00:06:35.425 "zone_management": false, 00:06:35.425 "zone_append": false, 00:06:35.425 "compare": false, 00:06:35.425 "compare_and_write": false, 00:06:35.425 "abort": true, 00:06:35.425 "seek_hole": false, 
00:06:35.425 "seek_data": false, 00:06:35.425 "copy": true, 00:06:35.425 "nvme_iov_md": false 00:06:35.425 }, 00:06:35.425 "memory_domains": [ 00:06:35.425 { 00:06:35.425 "dma_device_id": "system", 00:06:35.425 "dma_device_type": 1 00:06:35.425 }, 00:06:35.425 { 00:06:35.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:35.425 "dma_device_type": 2 00:06:35.425 } 00:06:35.425 ], 00:06:35.425 "driver_specific": { 00:06:35.425 "passthru": { 00:06:35.425 "name": "Passthru0", 00:06:35.425 "base_bdev_name": "Malloc2" 00:06:35.425 } 00:06:35.425 } 00:06:35.425 } 00:06:35.425 ]' 00:06:35.425 08:48:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:35.425 08:48:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:35.425 08:48:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:35.425 08:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.425 08:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:35.425 08:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.425 08:48:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:35.425 08:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.425 08:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:35.684 08:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.684 08:48:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:35.684 08:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.684 08:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:35.684 08:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.684 08:48:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:35.684 08:48:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:35.684 08:48:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:35.684 00:06:35.684 real 0m0.360s 00:06:35.684 user 0m0.226s 00:06:35.684 sys 0m0.037s 00:06:35.684 08:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.684 ************************************ 00:06:35.684 END TEST rpc_daemon_integrity 00:06:35.684 ************************************ 00:06:35.684 08:48:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:35.684 08:48:42 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:35.684 08:48:42 rpc -- rpc/rpc.sh@84 -- # killprocess 59550 00:06:35.684 08:48:42 rpc -- common/autotest_common.sh@950 -- # '[' -z 59550 ']' 00:06:35.684 08:48:42 rpc -- common/autotest_common.sh@954 -- # kill -0 59550 00:06:35.684 08:48:42 rpc -- common/autotest_common.sh@955 -- # uname 00:06:35.684 08:48:42 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:35.684 08:48:42 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59550 00:06:35.684 08:48:42 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:35.684 killing process with pid 59550 00:06:35.684 08:48:42 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:35.684 08:48:42 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59550' 00:06:35.684 08:48:42 rpc -- common/autotest_common.sh@969 -- # kill 59550 00:06:35.684 08:48:42 
rpc -- common/autotest_common.sh@974 -- # wait 59550 00:06:38.215 00:06:38.215 real 0m5.022s 00:06:38.215 user 0m5.667s 00:06:38.215 sys 0m0.825s 00:06:38.215 08:48:44 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.215 ************************************ 00:06:38.215 END TEST rpc 00:06:38.215 ************************************ 00:06:38.215 08:48:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.215 08:48:44 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:38.215 08:48:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:38.215 08:48:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.215 08:48:44 -- common/autotest_common.sh@10 -- # set +x 00:06:38.215 ************************************ 00:06:38.215 START TEST skip_rpc 00:06:38.215 ************************************ 00:06:38.215 08:48:44 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:38.215 * Looking for test storage... 00:06:38.215 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:38.215 08:48:44 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:38.215 08:48:44 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:38.215 08:48:44 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:38.216 08:48:44 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:38.216 08:48:44 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.216 08:48:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.216 ************************************ 00:06:38.216 START TEST skip_rpc 00:06:38.216 ************************************ 00:06:38.216 08:48:44 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:38.216 08:48:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=59771 00:06:38.216 08:48:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:38.216 08:48:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:38.216 08:48:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:38.216 [2024-07-25 08:48:45.116473] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
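The skip_rpc test starting here launches spdk_tgt with --no-rpc-server, so the only functional assertion is a negative one: the 'NOT rpc_cmd spdk_get_version' block below must fail because nothing is listening on the RPC socket, after which the target (pid 59771) is killed. Reduced to plain shell, the check looks roughly like this (a sketch; /var/tmp/spdk.sock is the default RPC socket):

  # Start the target without an RPC server, then confirm RPC calls are rejected
  build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  tgt_pid=$!
  sleep 5
  if scripts/rpc.py spdk_get_version; then
    echo "unexpected: RPC server answered" >&2
  else
    echo "ok: no RPC server is listening, as expected"
  fi
  kill "$tgt_pid"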
00:06:38.216 [2024-07-25 08:48:45.116666] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59771 ] 00:06:38.216 [2024-07-25 08:48:45.293466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.474 [2024-07-25 08:48:45.557157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.732 [2024-07-25 08:48:45.771615] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:42.920 08:48:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:42.920 08:48:49 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:42.920 08:48:49 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:42.920 08:48:49 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:42.920 08:48:49 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:42.920 08:48:49 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:42.920 08:48:49 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:42.920 08:48:49 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:42.920 08:48:49 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.920 08:48:49 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.920 08:48:49 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:42.920 08:48:49 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:42.920 08:48:49 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:42.920 08:48:49 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:42.920 08:48:49 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:42.920 08:48:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:42.920 08:48:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 59771 00:06:42.920 08:48:49 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 59771 ']' 00:06:42.920 08:48:49 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 59771 00:06:42.920 08:48:49 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:42.920 08:48:50 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:42.920 08:48:50 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59771 00:06:42.920 08:48:50 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:42.920 killing process with pid 59771 00:06:42.920 08:48:50 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:42.920 08:48:50 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59771' 00:06:42.920 08:48:50 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 59771 00:06:42.920 08:48:50 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 59771 00:06:45.453 00:06:45.453 real 0m7.327s 00:06:45.453 user 0m6.731s 00:06:45.453 sys 0m0.480s 00:06:45.453 ************************************ 00:06:45.453 END TEST skip_rpc 00:06:45.453 ************************************ 00:06:45.453 08:48:52 skip_rpc.skip_rpc -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.453 08:48:52 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.453 08:48:52 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:45.453 08:48:52 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:45.453 08:48:52 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.453 08:48:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.453 ************************************ 00:06:45.453 START TEST skip_rpc_with_json 00:06:45.453 ************************************ 00:06:45.453 08:48:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:45.453 08:48:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:45.453 08:48:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59875 00:06:45.453 08:48:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:45.453 08:48:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:45.453 08:48:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59875 00:06:45.453 08:48:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 59875 ']' 00:06:45.453 08:48:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.453 08:48:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:45.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.453 08:48:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.453 08:48:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:45.453 08:48:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:45.453 [2024-07-25 08:48:52.492315] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
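skip_rpc_with_json, which starts here, exercises a configuration round-trip: with the target up it creates a TCP NVMe-oF transport, saves the live configuration with save_config into test/rpc/config.json (the large JSON dump that follows), and later restarts spdk_tgt with --no-rpc-server --json pointing at that file (pid 59930 further below) to prove the saved JSON alone can reconstruct the subsystems. Reduced to the RPCs visible in the xtrace, the round-trip is roughly (a sketch; paths as used by this test, with the redirect into the config file assumed):

  # Build some state, snapshot it, then boot a fresh target from the snapshot
  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py save_config > test/rpc/config.json
  # ...stop the first target, then start a new one purely from the saved JSON:
  build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json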
00:06:45.453 [2024-07-25 08:48:52.492521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59875 ] 00:06:45.711 [2024-07-25 08:48:52.662374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.969 [2024-07-25 08:48:52.917538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.254 [2024-07-25 08:48:53.128397] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:46.820 08:48:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:46.820 08:48:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:46.820 08:48:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:46.820 08:48:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.820 08:48:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:46.820 [2024-07-25 08:48:53.738299] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:46.820 request: 00:06:46.820 { 00:06:46.820 "trtype": "tcp", 00:06:46.820 "method": "nvmf_get_transports", 00:06:46.820 "req_id": 1 00:06:46.820 } 00:06:46.820 Got JSON-RPC error response 00:06:46.820 response: 00:06:46.821 { 00:06:46.821 "code": -19, 00:06:46.821 "message": "No such device" 00:06:46.821 } 00:06:46.821 08:48:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:46.821 08:48:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:46.821 08:48:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.821 08:48:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:46.821 [2024-07-25 08:48:53.750418] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:46.821 08:48:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.821 08:48:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:46.821 08:48:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.821 08:48:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:46.821 08:48:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.821 08:48:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:46.821 { 00:06:46.821 "subsystems": [ 00:06:46.821 { 00:06:46.821 "subsystem": "vfio_user_target", 00:06:46.821 "config": null 00:06:46.821 }, 00:06:46.821 { 00:06:46.821 "subsystem": "keyring", 00:06:46.821 "config": [] 00:06:46.821 }, 00:06:46.821 { 00:06:46.821 "subsystem": "iobuf", 00:06:46.821 "config": [ 00:06:46.821 { 00:06:46.821 "method": "iobuf_set_options", 00:06:46.821 "params": { 00:06:46.821 "small_pool_count": 8192, 00:06:46.821 "large_pool_count": 1024, 00:06:46.821 "small_bufsize": 8192, 00:06:46.821 "large_bufsize": 135168 00:06:46.821 } 00:06:46.821 } 00:06:46.821 ] 00:06:46.821 }, 00:06:46.821 { 00:06:46.821 "subsystem": "sock", 00:06:46.821 "config": [ 00:06:46.821 { 00:06:46.821 "method": "sock_set_default_impl", 00:06:46.821 "params": { 00:06:46.821 "impl_name": 
"uring" 00:06:46.821 } 00:06:46.821 }, 00:06:46.821 { 00:06:46.821 "method": "sock_impl_set_options", 00:06:46.821 "params": { 00:06:46.821 "impl_name": "ssl", 00:06:46.821 "recv_buf_size": 4096, 00:06:46.821 "send_buf_size": 4096, 00:06:46.821 "enable_recv_pipe": true, 00:06:46.821 "enable_quickack": false, 00:06:46.821 "enable_placement_id": 0, 00:06:46.821 "enable_zerocopy_send_server": true, 00:06:46.821 "enable_zerocopy_send_client": false, 00:06:46.821 "zerocopy_threshold": 0, 00:06:46.821 "tls_version": 0, 00:06:46.821 "enable_ktls": false 00:06:46.821 } 00:06:46.821 }, 00:06:46.821 { 00:06:46.821 "method": "sock_impl_set_options", 00:06:46.821 "params": { 00:06:46.821 "impl_name": "posix", 00:06:46.821 "recv_buf_size": 2097152, 00:06:46.821 "send_buf_size": 2097152, 00:06:46.821 "enable_recv_pipe": true, 00:06:46.821 "enable_quickack": false, 00:06:46.821 "enable_placement_id": 0, 00:06:46.821 "enable_zerocopy_send_server": true, 00:06:46.821 "enable_zerocopy_send_client": false, 00:06:46.821 "zerocopy_threshold": 0, 00:06:46.821 "tls_version": 0, 00:06:46.821 "enable_ktls": false 00:06:46.821 } 00:06:46.821 }, 00:06:46.821 { 00:06:46.821 "method": "sock_impl_set_options", 00:06:46.821 "params": { 00:06:46.821 "impl_name": "uring", 00:06:46.821 "recv_buf_size": 2097152, 00:06:46.821 "send_buf_size": 2097152, 00:06:46.821 "enable_recv_pipe": true, 00:06:46.821 "enable_quickack": false, 00:06:46.821 "enable_placement_id": 0, 00:06:46.821 "enable_zerocopy_send_server": false, 00:06:46.821 "enable_zerocopy_send_client": false, 00:06:46.821 "zerocopy_threshold": 0, 00:06:46.821 "tls_version": 0, 00:06:46.821 "enable_ktls": false 00:06:46.821 } 00:06:46.821 } 00:06:46.821 ] 00:06:46.821 }, 00:06:46.821 { 00:06:46.821 "subsystem": "vmd", 00:06:46.821 "config": [] 00:06:46.821 }, 00:06:46.821 { 00:06:46.821 "subsystem": "accel", 00:06:46.821 "config": [ 00:06:46.821 { 00:06:46.821 "method": "accel_set_options", 00:06:46.821 "params": { 00:06:46.821 "small_cache_size": 128, 00:06:46.821 "large_cache_size": 16, 00:06:46.821 "task_count": 2048, 00:06:46.821 "sequence_count": 2048, 00:06:46.821 "buf_count": 2048 00:06:46.821 } 00:06:46.821 } 00:06:46.821 ] 00:06:46.821 }, 00:06:46.821 { 00:06:46.821 "subsystem": "bdev", 00:06:46.821 "config": [ 00:06:46.821 { 00:06:46.821 "method": "bdev_set_options", 00:06:46.821 "params": { 00:06:46.821 "bdev_io_pool_size": 65535, 00:06:46.821 "bdev_io_cache_size": 256, 00:06:46.821 "bdev_auto_examine": true, 00:06:46.821 "iobuf_small_cache_size": 128, 00:06:46.821 "iobuf_large_cache_size": 16 00:06:46.821 } 00:06:46.821 }, 00:06:46.821 { 00:06:46.821 "method": "bdev_raid_set_options", 00:06:46.821 "params": { 00:06:46.821 "process_window_size_kb": 1024, 00:06:46.821 "process_max_bandwidth_mb_sec": 0 00:06:46.821 } 00:06:46.821 }, 00:06:46.821 { 00:06:46.821 "method": "bdev_iscsi_set_options", 00:06:46.821 "params": { 00:06:46.821 "timeout_sec": 30 00:06:46.821 } 00:06:46.821 }, 00:06:46.821 { 00:06:46.821 "method": "bdev_nvme_set_options", 00:06:46.821 "params": { 00:06:46.821 "action_on_timeout": "none", 00:06:46.821 "timeout_us": 0, 00:06:46.821 "timeout_admin_us": 0, 00:06:46.821 "keep_alive_timeout_ms": 10000, 00:06:46.821 "arbitration_burst": 0, 00:06:46.821 "low_priority_weight": 0, 00:06:46.821 "medium_priority_weight": 0, 00:06:46.821 "high_priority_weight": 0, 00:06:46.821 "nvme_adminq_poll_period_us": 10000, 00:06:46.821 "nvme_ioq_poll_period_us": 0, 00:06:46.821 "io_queue_requests": 0, 00:06:46.821 "delay_cmd_submit": true, 00:06:46.821 
"transport_retry_count": 4, 00:06:46.821 "bdev_retry_count": 3, 00:06:46.821 "transport_ack_timeout": 0, 00:06:46.821 "ctrlr_loss_timeout_sec": 0, 00:06:46.821 "reconnect_delay_sec": 0, 00:06:46.821 "fast_io_fail_timeout_sec": 0, 00:06:46.821 "disable_auto_failback": false, 00:06:46.821 "generate_uuids": false, 00:06:46.821 "transport_tos": 0, 00:06:46.821 "nvme_error_stat": false, 00:06:46.821 "rdma_srq_size": 0, 00:06:46.821 "io_path_stat": false, 00:06:46.821 "allow_accel_sequence": false, 00:06:46.821 "rdma_max_cq_size": 0, 00:06:46.821 "rdma_cm_event_timeout_ms": 0, 00:06:46.821 "dhchap_digests": [ 00:06:46.821 "sha256", 00:06:46.821 "sha384", 00:06:46.821 "sha512" 00:06:46.821 ], 00:06:46.821 "dhchap_dhgroups": [ 00:06:46.821 "null", 00:06:46.821 "ffdhe2048", 00:06:46.821 "ffdhe3072", 00:06:46.821 "ffdhe4096", 00:06:46.821 "ffdhe6144", 00:06:46.821 "ffdhe8192" 00:06:46.821 ] 00:06:46.821 } 00:06:46.821 }, 00:06:46.821 { 00:06:46.821 "method": "bdev_nvme_set_hotplug", 00:06:46.821 "params": { 00:06:46.821 "period_us": 100000, 00:06:46.821 "enable": false 00:06:46.821 } 00:06:46.821 }, 00:06:46.821 { 00:06:46.821 "method": "bdev_wait_for_examine" 00:06:46.821 } 00:06:46.821 ] 00:06:46.821 }, 00:06:46.821 { 00:06:46.821 "subsystem": "scsi", 00:06:46.821 "config": null 00:06:46.821 }, 00:06:46.821 { 00:06:46.821 "subsystem": "scheduler", 00:06:46.821 "config": [ 00:06:46.821 { 00:06:46.821 "method": "framework_set_scheduler", 00:06:46.821 "params": { 00:06:46.821 "name": "static" 00:06:46.821 } 00:06:46.821 } 00:06:46.821 ] 00:06:46.821 }, 00:06:46.821 { 00:06:46.821 "subsystem": "vhost_scsi", 00:06:46.821 "config": [] 00:06:46.821 }, 00:06:46.821 { 00:06:46.821 "subsystem": "vhost_blk", 00:06:46.821 "config": [] 00:06:46.821 }, 00:06:46.821 { 00:06:46.821 "subsystem": "ublk", 00:06:46.821 "config": [] 00:06:46.821 }, 00:06:46.821 { 00:06:46.821 "subsystem": "nbd", 00:06:46.821 "config": [] 00:06:46.821 }, 00:06:46.821 { 00:06:46.822 "subsystem": "nvmf", 00:06:46.822 "config": [ 00:06:46.822 { 00:06:46.822 "method": "nvmf_set_config", 00:06:46.822 "params": { 00:06:46.822 "discovery_filter": "match_any", 00:06:46.822 "admin_cmd_passthru": { 00:06:46.822 "identify_ctrlr": false 00:06:46.822 } 00:06:46.822 } 00:06:46.822 }, 00:06:46.822 { 00:06:46.822 "method": "nvmf_set_max_subsystems", 00:06:46.822 "params": { 00:06:46.822 "max_subsystems": 1024 00:06:46.822 } 00:06:46.822 }, 00:06:46.822 { 00:06:46.822 "method": "nvmf_set_crdt", 00:06:46.822 "params": { 00:06:46.822 "crdt1": 0, 00:06:46.822 "crdt2": 0, 00:06:46.822 "crdt3": 0 00:06:46.822 } 00:06:46.822 }, 00:06:46.822 { 00:06:46.822 "method": "nvmf_create_transport", 00:06:46.822 "params": { 00:06:46.822 "trtype": "TCP", 00:06:46.822 "max_queue_depth": 128, 00:06:46.822 "max_io_qpairs_per_ctrlr": 127, 00:06:46.822 "in_capsule_data_size": 4096, 00:06:46.822 "max_io_size": 131072, 00:06:46.822 "io_unit_size": 131072, 00:06:46.822 "max_aq_depth": 128, 00:06:46.822 "num_shared_buffers": 511, 00:06:46.822 "buf_cache_size": 4294967295, 00:06:46.822 "dif_insert_or_strip": false, 00:06:46.822 "zcopy": false, 00:06:46.822 "c2h_success": true, 00:06:46.822 "sock_priority": 0, 00:06:46.822 "abort_timeout_sec": 1, 00:06:46.822 "ack_timeout": 0, 00:06:46.822 "data_wr_pool_size": 0 00:06:46.822 } 00:06:46.822 } 00:06:46.822 ] 00:06:46.822 }, 00:06:46.822 { 00:06:46.822 "subsystem": "iscsi", 00:06:46.822 "config": [ 00:06:46.822 { 00:06:46.822 "method": "iscsi_set_options", 00:06:46.822 "params": { 00:06:46.822 "node_base": "iqn.2016-06.io.spdk", 
00:06:46.822 "max_sessions": 128, 00:06:46.822 "max_connections_per_session": 2, 00:06:46.822 "max_queue_depth": 64, 00:06:46.822 "default_time2wait": 2, 00:06:46.822 "default_time2retain": 20, 00:06:46.822 "first_burst_length": 8192, 00:06:46.822 "immediate_data": true, 00:06:46.822 "allow_duplicated_isid": false, 00:06:46.822 "error_recovery_level": 0, 00:06:46.822 "nop_timeout": 60, 00:06:46.822 "nop_in_interval": 30, 00:06:46.822 "disable_chap": false, 00:06:46.822 "require_chap": false, 00:06:46.822 "mutual_chap": false, 00:06:46.822 "chap_group": 0, 00:06:46.822 "max_large_datain_per_connection": 64, 00:06:46.822 "max_r2t_per_connection": 4, 00:06:46.822 "pdu_pool_size": 36864, 00:06:46.822 "immediate_data_pool_size": 16384, 00:06:46.822 "data_out_pool_size": 2048 00:06:46.822 } 00:06:46.822 } 00:06:46.822 ] 00:06:46.822 } 00:06:46.822 ] 00:06:46.822 } 00:06:46.822 08:48:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:46.822 08:48:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59875 00:06:46.822 08:48:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 59875 ']' 00:06:46.822 08:48:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 59875 00:06:46.822 08:48:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:47.081 08:48:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:47.081 08:48:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59875 00:06:47.081 08:48:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:47.081 killing process with pid 59875 00:06:47.081 08:48:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:47.081 08:48:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59875' 00:06:47.081 08:48:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 59875 00:06:47.081 08:48:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 59875 00:06:49.613 08:48:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59930 00:06:49.613 08:48:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:49.613 08:48:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:54.878 08:49:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59930 00:06:54.878 08:49:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 59930 ']' 00:06:54.878 08:49:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 59930 00:06:54.878 08:49:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:54.878 08:49:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:54.878 08:49:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59930 00:06:54.878 08:49:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:54.878 killing process with pid 59930 00:06:54.878 08:49:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:54.878 08:49:01 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 59930' 00:06:54.878 08:49:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 59930 00:06:54.878 08:49:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 59930 00:06:56.777 08:49:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:56.777 08:49:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:56.777 00:06:56.777 real 0m11.045s 00:06:56.777 user 0m10.448s 00:06:56.777 sys 0m1.004s 00:06:56.777 ************************************ 00:06:56.777 END TEST skip_rpc_with_json 00:06:56.777 ************************************ 00:06:56.777 08:49:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:56.777 08:49:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:56.777 08:49:03 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:56.777 08:49:03 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:56.777 08:49:03 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.777 08:49:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.777 ************************************ 00:06:56.777 START TEST skip_rpc_with_delay 00:06:56.777 ************************************ 00:06:56.777 08:49:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:56.777 08:49:03 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:56.777 08:49:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:56.777 08:49:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:56.777 08:49:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:56.777 08:49:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:56.777 08:49:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:56.777 08:49:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:56.777 08:49:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:56.777 08:49:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:56.777 08:49:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:56.777 08:49:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:56.777 08:49:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:56.777 [2024-07-25 08:49:03.585982] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
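The skip_rpc_with_delay case above asserts exactly this failure: spdk_tgt must refuse --wait-for-rpc when the RPC server has been disabled with --no-rpc-server. A minimal sketch of that expected-failure check, with an explicit exit-status test standing in for the harness's NOT wrapper (paths relative to the SPDK repo root):

# must exit non-zero: --wait-for-rpc requires the RPC server that --no-rpc-server disables
if build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo 'unexpected success: --wait-for-rpc was accepted without an RPC server' >&2
    exit 1
fi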
00:06:56.777 [2024-07-25 08:49:03.586185] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:56.777 08:49:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:56.777 08:49:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:56.777 08:49:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:56.777 08:49:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:56.777 00:06:56.777 real 0m0.203s 00:06:56.777 user 0m0.115s 00:06:56.777 sys 0m0.085s 00:06:56.777 08:49:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:56.777 ************************************ 00:06:56.777 END TEST skip_rpc_with_delay 00:06:56.777 ************************************ 00:06:56.777 08:49:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:56.777 08:49:03 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:56.777 08:49:03 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:56.777 08:49:03 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:56.777 08:49:03 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:56.777 08:49:03 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.777 08:49:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.777 ************************************ 00:06:56.777 START TEST exit_on_failed_rpc_init 00:06:56.777 ************************************ 00:06:56.777 08:49:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:56.777 08:49:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=60059 00:06:56.777 08:49:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 60059 00:06:56.777 08:49:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:56.777 08:49:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 60059 ']' 00:06:56.777 08:49:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.777 08:49:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:56.777 08:49:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.777 08:49:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:56.777 08:49:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:56.777 [2024-07-25 08:49:03.841982] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:56.777 [2024-07-25 08:49:03.842186] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60059 ] 00:06:57.034 [2024-07-25 08:49:04.020129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.292 [2024-07-25 08:49:04.308701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.550 [2024-07-25 08:49:04.541441] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:58.115 08:49:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:58.115 08:49:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:58.115 08:49:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:58.115 08:49:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:58.115 08:49:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:58.115 08:49:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:58.115 08:49:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:58.115 08:49:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.115 08:49:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:58.115 08:49:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.115 08:49:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:58.115 08:49:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.115 08:49:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:58.115 08:49:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:58.115 08:49:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:58.374 [2024-07-25 08:49:05.328391] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:58.374 [2024-07-25 08:49:05.328585] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60083 ] 00:06:58.632 [2024-07-25 08:49:05.505099] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.889 [2024-07-25 08:49:05.773856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.889 [2024-07-25 08:49:05.773988] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:58.889 [2024-07-25 08:49:05.774015] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:58.889 [2024-07-25 08:49:05.774040] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:59.149 08:49:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:59.149 08:49:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:59.149 08:49:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:59.149 08:49:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:59.149 08:49:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:59.149 08:49:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:59.149 08:49:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:59.149 08:49:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 60059 00:06:59.149 08:49:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 60059 ']' 00:06:59.149 08:49:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 60059 00:06:59.149 08:49:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:59.149 08:49:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:59.149 08:49:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60059 00:06:59.149 killing process with pid 60059 00:06:59.149 08:49:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:59.149 08:49:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:59.149 08:49:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60059' 00:06:59.149 08:49:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 60059 00:06:59.149 08:49:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 60059 00:07:01.676 00:07:01.676 real 0m4.683s 00:07:01.676 user 0m5.262s 00:07:01.676 sys 0m0.738s 00:07:01.676 ************************************ 00:07:01.676 END TEST exit_on_failed_rpc_init 00:07:01.676 ************************************ 00:07:01.676 08:49:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.676 08:49:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:01.676 08:49:08 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:01.676 ************************************ 00:07:01.676 END TEST skip_rpc 00:07:01.676 ************************************ 00:07:01.676 00:07:01.676 real 0m23.550s 00:07:01.676 user 0m22.648s 00:07:01.676 sys 0m2.501s 00:07:01.676 08:49:08 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.676 08:49:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.676 08:49:08 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:01.676 08:49:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:01.676 08:49:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.676 08:49:08 -- common/autotest_common.sh@10 -- # set +x 00:07:01.676 
************************************ 00:07:01.676 START TEST rpc_client 00:07:01.676 ************************************ 00:07:01.676 08:49:08 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:01.676 * Looking for test storage... 00:07:01.676 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:07:01.676 08:49:08 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:07:01.676 OK 00:07:01.676 08:49:08 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:01.676 00:07:01.676 real 0m0.157s 00:07:01.676 user 0m0.055s 00:07:01.676 sys 0m0.101s 00:07:01.676 08:49:08 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.676 08:49:08 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:01.676 ************************************ 00:07:01.676 END TEST rpc_client 00:07:01.676 ************************************ 00:07:01.676 08:49:08 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:01.676 08:49:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:01.676 08:49:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.676 08:49:08 -- common/autotest_common.sh@10 -- # set +x 00:07:01.676 ************************************ 00:07:01.676 START TEST json_config 00:07:01.676 ************************************ 00:07:01.676 08:49:08 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:01.676 08:49:08 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:01.676 08:49:08 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:01.676 08:49:08 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:01.676 08:49:08 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:01.676 08:49:08 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:01.676 08:49:08 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:01.676 08:49:08 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:01.676 08:49:08 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:01.676 08:49:08 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:01.676 08:49:08 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:01.677 08:49:08 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:01.677 08:49:08 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:01.677 08:49:08 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:07:01.677 08:49:08 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:07:01.677 08:49:08 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:01.677 08:49:08 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:01.677 08:49:08 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:01.677 08:49:08 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:01.677 08:49:08 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:01.677 08:49:08 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:01.677 08:49:08 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:01.677 08:49:08 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:01.677 08:49:08 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.677 08:49:08 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.677 08:49:08 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.677 08:49:08 json_config -- paths/export.sh@5 -- # export PATH 00:07:01.677 08:49:08 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.677 08:49:08 json_config -- nvmf/common.sh@47 -- # : 0 00:07:01.677 08:49:08 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:01.677 08:49:08 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:01.677 08:49:08 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:01.677 08:49:08 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:01.677 08:49:08 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:01.677 08:49:08 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:01.677 08:49:08 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:01.677 08:49:08 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:01.677 08:49:08 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:01.677 08:49:08 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:01.677 08:49:08 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:01.677 08:49:08 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:01.677 08:49:08 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:01.677 08:49:08 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:07:01.677 08:49:08 json_config -- 
json_config/json_config.sh@31 -- # declare -A app_pid 00:07:01.677 08:49:08 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:07:01.677 08:49:08 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:07:01.677 08:49:08 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:07:01.677 08:49:08 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:07:01.677 08:49:08 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:07:01.677 08:49:08 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:07:01.677 08:49:08 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:07:01.677 08:49:08 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:01.677 INFO: JSON configuration test init 00:07:01.677 08:49:08 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:07:01.677 08:49:08 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:07:01.677 08:49:08 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:07:01.677 08:49:08 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:01.677 08:49:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:01.677 08:49:08 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:07:01.677 08:49:08 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:01.677 08:49:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:01.935 08:49:08 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:07:01.935 08:49:08 json_config -- json_config/common.sh@9 -- # local app=target 00:07:01.935 08:49:08 json_config -- json_config/common.sh@10 -- # shift 00:07:01.935 08:49:08 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:01.935 08:49:08 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:01.935 08:49:08 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:01.935 08:49:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:01.935 08:49:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:01.935 08:49:08 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=60231 00:07:01.935 Waiting for target to run... 00:07:01.935 08:49:08 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:01.935 08:49:08 json_config -- json_config/common.sh@25 -- # waitforlisten 60231 /var/tmp/spdk_tgt.sock 00:07:01.935 08:49:08 json_config -- common/autotest_common.sh@831 -- # '[' -z 60231 ']' 00:07:01.935 08:49:08 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:01.935 08:49:08 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:07:01.935 08:49:08 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:01.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
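The json_config suite launches its target with --wait-for-rpc and only proceeds once the UNIX-domain RPC socket answers. A minimal sketch of that launch-and-wait pattern, reusing the flags and socket path from the trace above; the rpc_get_methods polling loop is an assumed stand-in for the harness's waitforlisten helper:

build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
# poll until the target answers on the RPC socket before issuing further rpc.py calls
until scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods > /dev/null 2>&1; do
    sleep 0.5
done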
00:07:01.935 08:49:08 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:01.935 08:49:08 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:01.935 08:49:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:01.935 [2024-07-25 08:49:08.927538] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:01.935 [2024-07-25 08:49:08.927742] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60231 ] 00:07:02.501 [2024-07-25 08:49:09.394330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.760 [2024-07-25 08:49:09.641683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.760 08:49:09 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:02.760 08:49:09 json_config -- common/autotest_common.sh@864 -- # return 0 00:07:02.760 00:07:02.760 08:49:09 json_config -- json_config/common.sh@26 -- # echo '' 00:07:02.760 08:49:09 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:07:02.760 08:49:09 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:07:02.760 08:49:09 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:02.760 08:49:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:02.760 08:49:09 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:07:02.760 08:49:09 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:07:02.760 08:49:09 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:02.760 08:49:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:02.760 08:49:09 json_config -- json_config/json_config.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:07:02.760 08:49:09 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:07:02.760 08:49:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:07:03.328 [2024-07-25 08:49:10.306043] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:03.895 08:49:10 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:07:03.895 08:49:10 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:07:03.895 08:49:10 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:03.895 08:49:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:03.895 08:49:10 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:07:03.895 08:49:10 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:07:03.895 08:49:10 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:07:03.895 08:49:10 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:07:03.895 08:49:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:03.895 08:49:10 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:07:04.154 08:49:11 json_config -- json_config/json_config.sh@48 
-- # get_types=('bdev_register' 'bdev_unregister') 00:07:04.154 08:49:11 json_config -- json_config/json_config.sh@48 -- # local get_types 00:07:04.154 08:49:11 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:07:04.154 08:49:11 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:07:04.154 08:49:11 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:07:04.154 08:49:11 json_config -- json_config/json_config.sh@51 -- # sort 00:07:04.154 08:49:11 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:07:04.154 08:49:11 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:07:04.154 08:49:11 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:07:04.154 08:49:11 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:07:04.154 08:49:11 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:04.154 08:49:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:04.154 08:49:11 json_config -- json_config/json_config.sh@59 -- # return 0 00:07:04.154 08:49:11 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:07:04.154 08:49:11 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:07:04.154 08:49:11 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:07:04.154 08:49:11 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:07:04.154 08:49:11 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:07:04.154 08:49:11 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:07:04.154 08:49:11 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:04.154 08:49:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:04.154 08:49:11 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:07:04.154 08:49:11 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:07:04.154 08:49:11 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:07:04.154 08:49:11 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:04.154 08:49:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:04.723 MallocForNvmf0 00:07:04.723 08:49:11 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:04.723 08:49:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:04.982 MallocForNvmf1 00:07:04.982 08:49:11 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:07:04.982 08:49:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:07:05.241 [2024-07-25 08:49:12.105194] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:05.241 08:49:12 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:05.241 08:49:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:05.500 08:49:12 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:05.500 08:49:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:05.500 08:49:12 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:05.500 08:49:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:05.758 08:49:12 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:05.758 08:49:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:06.017 [2024-07-25 08:49:13.014069] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:06.017 08:49:13 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:07:06.017 08:49:13 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:06.017 08:49:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:06.017 08:49:13 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:07:06.017 08:49:13 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:06.017 08:49:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:06.017 08:49:13 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:07:06.017 08:49:13 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:06.017 08:49:13 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:06.275 MallocBdevForConfigChangeCheck 00:07:06.275 08:49:13 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:07:06.275 08:49:13 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:06.275 08:49:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:06.275 08:49:13 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:07:06.275 08:49:13 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:06.843 INFO: shutting down applications... 00:07:06.843 08:49:13 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 
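At this point the whole NVMe-oF configuration has been built over JSON-RPC and snapshotted with save_config before the target is shut down below. A condensed sketch of the same sequence as standalone rpc.py calls, reusing the names, sizes and addresses from the trace (redirecting save_config to spdk_tgt_config.json mirrors the snapshot the later diff checks compare against):

rpc='scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
$rpc bdev_malloc_create 8 512 --name MallocForNvmf0
$rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
$rpc nvmf_create_transport -t tcp -u 8192 -c 0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
$rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck   # marker bdev for the change-detection check
$rpc save_config > spdk_tgt_config.json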
00:07:06.843 08:49:13 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:07:06.843 08:49:13 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:07:06.843 08:49:13 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:07:06.843 08:49:13 json_config -- json_config/json_config.sh@337 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:07:07.101 Calling clear_iscsi_subsystem 00:07:07.101 Calling clear_nvmf_subsystem 00:07:07.101 Calling clear_nbd_subsystem 00:07:07.101 Calling clear_ublk_subsystem 00:07:07.101 Calling clear_vhost_blk_subsystem 00:07:07.101 Calling clear_vhost_scsi_subsystem 00:07:07.101 Calling clear_bdev_subsystem 00:07:07.101 08:49:14 json_config -- json_config/json_config.sh@341 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:07:07.101 08:49:14 json_config -- json_config/json_config.sh@347 -- # count=100 00:07:07.101 08:49:14 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:07:07.101 08:49:14 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:07.101 08:49:14 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:07:07.101 08:49:14 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:07:07.669 08:49:14 json_config -- json_config/json_config.sh@349 -- # break 00:07:07.669 08:49:14 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:07:07.669 08:49:14 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:07:07.669 08:49:14 json_config -- json_config/common.sh@31 -- # local app=target 00:07:07.669 08:49:14 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:07.669 08:49:14 json_config -- json_config/common.sh@35 -- # [[ -n 60231 ]] 00:07:07.669 08:49:14 json_config -- json_config/common.sh@38 -- # kill -SIGINT 60231 00:07:07.669 08:49:14 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:07.669 08:49:14 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:07.669 08:49:14 json_config -- json_config/common.sh@41 -- # kill -0 60231 00:07:07.669 08:49:14 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:07:07.928 08:49:15 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:07.928 08:49:15 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:07.928 08:49:15 json_config -- json_config/common.sh@41 -- # kill -0 60231 00:07:07.928 08:49:15 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:07:08.496 08:49:15 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:08.496 08:49:15 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:08.496 08:49:15 json_config -- json_config/common.sh@41 -- # kill -0 60231 00:07:08.496 08:49:15 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:08.496 08:49:15 json_config -- json_config/common.sh@43 -- # break 00:07:08.496 08:49:15 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:08.496 08:49:15 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:08.496 SPDK target shutdown done 00:07:08.496 INFO: relaunching applications... 
00:07:08.496 08:49:15 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:07:08.496 08:49:15 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:08.496 08:49:15 json_config -- json_config/common.sh@9 -- # local app=target 00:07:08.496 08:49:15 json_config -- json_config/common.sh@10 -- # shift 00:07:08.496 08:49:15 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:08.496 08:49:15 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:08.496 08:49:15 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:08.496 08:49:15 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:08.496 08:49:15 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:08.496 08:49:15 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=60440 00:07:08.496 Waiting for target to run... 00:07:08.496 08:49:15 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:08.496 08:49:15 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:08.496 08:49:15 json_config -- json_config/common.sh@25 -- # waitforlisten 60440 /var/tmp/spdk_tgt.sock 00:07:08.496 08:49:15 json_config -- common/autotest_common.sh@831 -- # '[' -z 60440 ']' 00:07:08.496 08:49:15 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:08.496 08:49:15 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:08.496 08:49:15 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:08.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:08.496 08:49:15 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:08.496 08:49:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:08.755 [2024-07-25 08:49:15.651850] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:08.755 [2024-07-25 08:49:15.652086] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60440 ] 00:07:09.323 [2024-07-25 08:49:16.137292] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.323 [2024-07-25 08:49:16.344030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.582 [2024-07-25 08:49:16.619678] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:10.148 [2024-07-25 08:49:17.232274] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:10.407 [2024-07-25 08:49:17.264469] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:10.407 08:49:17 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:10.407 00:07:10.407 08:49:17 json_config -- common/autotest_common.sh@864 -- # return 0 00:07:10.407 08:49:17 json_config -- json_config/common.sh@26 -- # echo '' 00:07:10.407 08:49:17 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:07:10.407 INFO: Checking if target configuration is the same... 00:07:10.407 08:49:17 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:07:10.407 08:49:17 json_config -- json_config/json_config.sh@382 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:10.407 08:49:17 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:07:10.407 08:49:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:10.407 + '[' 2 -ne 2 ']' 00:07:10.407 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:10.407 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:07:10.407 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:10.407 +++ basename /dev/fd/62 00:07:10.407 ++ mktemp /tmp/62.XXX 00:07:10.407 + tmp_file_1=/tmp/62.aVa 00:07:10.407 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:10.407 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:10.407 + tmp_file_2=/tmp/spdk_tgt_config.json.FAQ 00:07:10.407 + ret=0 00:07:10.407 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:10.666 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:10.666 + diff -u /tmp/62.aVa /tmp/spdk_tgt_config.json.FAQ 00:07:10.666 INFO: JSON config files are the same 00:07:10.666 + echo 'INFO: JSON config files are the same' 00:07:10.666 + rm /tmp/62.aVa /tmp/spdk_tgt_config.json.FAQ 00:07:10.666 + exit 0 00:07:10.666 08:49:17 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:07:10.666 INFO: changing configuration and checking if this can be detected... 00:07:10.666 08:49:17 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
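The identity check above normalizes both the live configuration (tgt_rpc save_config fed in as /dev/fd/62) and the stored snapshot with config_filter.py -method sort before running diff -u, so only real content differences count. A one-line sketch of the same comparison, with process substitution standing in for the /dev/fd/62 seen in the trace; exit status 0 means the configs match, non-zero means a change was detected, which the next step provokes by deleting MallocBdevForConfigChangeCheck:

# compare the running target's config against the saved snapshot
test/json_config/json_diff.sh <(scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config) spdk_tgt_config.json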
00:07:10.666 08:49:17 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:10.666 08:49:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:11.233 08:49:18 json_config -- json_config/json_config.sh@391 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:11.233 08:49:18 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:07:11.233 08:49:18 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:11.233 + '[' 2 -ne 2 ']' 00:07:11.233 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:11.233 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:07:11.233 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:11.233 +++ basename /dev/fd/62 00:07:11.233 ++ mktemp /tmp/62.XXX 00:07:11.233 + tmp_file_1=/tmp/62.Joj 00:07:11.233 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:11.233 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:11.233 + tmp_file_2=/tmp/spdk_tgt_config.json.lOr 00:07:11.233 + ret=0 00:07:11.233 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:11.492 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:11.492 + diff -u /tmp/62.Joj /tmp/spdk_tgt_config.json.lOr 00:07:11.492 + ret=1 00:07:11.492 + echo '=== Start of file: /tmp/62.Joj ===' 00:07:11.492 + cat /tmp/62.Joj 00:07:11.492 + echo '=== End of file: /tmp/62.Joj ===' 00:07:11.492 + echo '' 00:07:11.492 + echo '=== Start of file: /tmp/spdk_tgt_config.json.lOr ===' 00:07:11.492 + cat /tmp/spdk_tgt_config.json.lOr 00:07:11.492 + echo '=== End of file: /tmp/spdk_tgt_config.json.lOr ===' 00:07:11.492 + echo '' 00:07:11.492 + rm /tmp/62.Joj /tmp/spdk_tgt_config.json.lOr 00:07:11.492 + exit 1 00:07:11.492 INFO: configuration change detected. 00:07:11.492 08:49:18 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 
00:07:11.492 08:49:18 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:07:11.492 08:49:18 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:07:11.492 08:49:18 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:11.492 08:49:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:11.492 08:49:18 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:07:11.492 08:49:18 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:07:11.492 08:49:18 json_config -- json_config/json_config.sh@321 -- # [[ -n 60440 ]] 00:07:11.492 08:49:18 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:07:11.492 08:49:18 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:07:11.492 08:49:18 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:11.492 08:49:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:11.492 08:49:18 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:07:11.492 08:49:18 json_config -- json_config/json_config.sh@197 -- # uname -s 00:07:11.492 08:49:18 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:07:11.492 08:49:18 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:07:11.492 08:49:18 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:07:11.492 08:49:18 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:07:11.492 08:49:18 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:11.492 08:49:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:11.492 08:49:18 json_config -- json_config/json_config.sh@327 -- # killprocess 60440 00:07:11.492 08:49:18 json_config -- common/autotest_common.sh@950 -- # '[' -z 60440 ']' 00:07:11.492 08:49:18 json_config -- common/autotest_common.sh@954 -- # kill -0 60440 00:07:11.492 08:49:18 json_config -- common/autotest_common.sh@955 -- # uname 00:07:11.492 08:49:18 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:11.492 08:49:18 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60440 00:07:11.492 08:49:18 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:11.492 killing process with pid 60440 00:07:11.492 08:49:18 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:11.492 08:49:18 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60440' 00:07:11.492 08:49:18 json_config -- common/autotest_common.sh@969 -- # kill 60440 00:07:11.492 08:49:18 json_config -- common/autotest_common.sh@974 -- # wait 60440 00:07:12.869 08:49:19 json_config -- json_config/json_config.sh@330 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:12.869 08:49:19 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:07:12.869 08:49:19 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:12.869 08:49:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:12.869 08:49:19 json_config -- json_config/json_config.sh@332 -- # return 0 00:07:12.869 INFO: Success 00:07:12.869 08:49:19 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:07:12.869 00:07:12.869 real 0m10.959s 00:07:12.869 user 0m14.361s 00:07:12.869 sys 0m2.148s 00:07:12.869 
08:49:19 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.869 08:49:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:12.869 ************************************ 00:07:12.869 END TEST json_config 00:07:12.869 ************************************ 00:07:12.869 08:49:19 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:12.869 08:49:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:12.869 08:49:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.869 08:49:19 -- common/autotest_common.sh@10 -- # set +x 00:07:12.869 ************************************ 00:07:12.869 START TEST json_config_extra_key 00:07:12.869 ************************************ 00:07:12.869 08:49:19 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:12.869 08:49:19 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:12.869 08:49:19 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:12.869 08:49:19 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:12.869 08:49:19 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:12.869 08:49:19 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:12.869 08:49:19 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:12.869 08:49:19 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:12.869 08:49:19 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:12.869 08:49:19 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:12.869 08:49:19 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:12.869 08:49:19 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:12.869 08:49:19 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:12.869 08:49:19 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:07:12.869 08:49:19 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:07:12.869 08:49:19 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:12.869 08:49:19 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:12.869 08:49:19 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:12.869 08:49:19 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:12.869 08:49:19 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:12.869 08:49:19 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:12.869 08:49:19 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:12.869 08:49:19 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:12.869 08:49:19 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.870 08:49:19 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.870 08:49:19 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.870 08:49:19 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:12.870 08:49:19 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.870 08:49:19 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:07:12.870 08:49:19 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:12.870 08:49:19 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:12.870 08:49:19 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:12.870 08:49:19 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:12.870 08:49:19 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:12.870 08:49:19 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:12.870 08:49:19 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:12.870 08:49:19 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:12.870 08:49:19 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:12.870 08:49:19 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:12.870 08:49:19 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:12.870 08:49:19 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:12.870 08:49:19 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:12.870 08:49:19 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:12.870 08:49:19 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # 
declare -A app_params 00:07:12.870 08:49:19 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:07:12.870 08:49:19 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:12.870 08:49:19 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:12.870 INFO: launching applications... 00:07:12.870 08:49:19 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:07:12.870 08:49:19 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:12.870 08:49:19 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:12.870 08:49:19 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:12.870 08:49:19 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:12.870 08:49:19 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:12.870 08:49:19 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:12.870 08:49:19 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:12.870 08:49:19 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:12.870 Waiting for target to run... 00:07:12.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:12.870 08:49:19 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=60598 00:07:12.870 08:49:19 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:12.870 08:49:19 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 60598 /var/tmp/spdk_tgt.sock 00:07:12.870 08:49:19 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:12.870 08:49:19 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 60598 ']' 00:07:12.870 08:49:19 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:12.870 08:49:19 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:12.870 08:49:19 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:12.870 08:49:19 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:12.870 08:49:19 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:12.870 [2024-07-25 08:49:19.928266] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:12.870 [2024-07-25 08:49:19.928468] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60598 ] 00:07:13.438 [2024-07-25 08:49:20.420353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.698 [2024-07-25 08:49:20.683475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.959 [2024-07-25 08:49:20.846582] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:14.526 00:07:14.526 INFO: shutting down applications... 00:07:14.526 08:49:21 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:14.526 08:49:21 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:07:14.526 08:49:21 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:14.526 08:49:21 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:07:14.526 08:49:21 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:14.526 08:49:21 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:14.526 08:49:21 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:14.526 08:49:21 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 60598 ]] 00:07:14.526 08:49:21 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 60598 00:07:14.526 08:49:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:14.526 08:49:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:14.526 08:49:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60598 00:07:14.526 08:49:21 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:14.785 08:49:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:14.785 08:49:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:14.785 08:49:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60598 00:07:14.785 08:49:21 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:15.351 08:49:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:15.351 08:49:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:15.351 08:49:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60598 00:07:15.351 08:49:22 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:15.919 08:49:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:15.919 08:49:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:15.919 08:49:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60598 00:07:15.919 08:49:22 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:16.485 08:49:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:16.485 08:49:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:16.485 08:49:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60598 00:07:16.485 08:49:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:17.052 08:49:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:17.052 08:49:23 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:07:17.052 08:49:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60598 00:07:17.052 08:49:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:17.310 08:49:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:17.311 08:49:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:17.311 08:49:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60598 00:07:17.311 SPDK target shutdown done 00:07:17.311 Success 00:07:17.311 08:49:24 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:17.311 08:49:24 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:17.311 08:49:24 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:17.311 08:49:24 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:17.311 08:49:24 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:17.311 ************************************ 00:07:17.311 END TEST json_config_extra_key 00:07:17.311 ************************************ 00:07:17.311 00:07:17.311 real 0m4.673s 00:07:17.311 user 0m3.957s 00:07:17.311 sys 0m0.668s 00:07:17.311 08:49:24 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:17.311 08:49:24 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:17.311 08:49:24 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:17.311 08:49:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:17.311 08:49:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:17.311 08:49:24 -- common/autotest_common.sh@10 -- # set +x 00:07:17.569 ************************************ 00:07:17.569 START TEST alias_rpc 00:07:17.569 ************************************ 00:07:17.569 08:49:24 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:17.569 * Looking for test storage... 00:07:17.569 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:07:17.569 08:49:24 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:17.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.569 08:49:24 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=60707 00:07:17.569 08:49:24 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:17.569 08:49:24 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 60707 00:07:17.569 08:49:24 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 60707 ']' 00:07:17.569 08:49:24 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.569 08:49:24 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:17.569 08:49:24 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.569 08:49:24 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:17.569 08:49:24 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.569 [2024-07-25 08:49:24.646232] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:17.569 [2024-07-25 08:49:24.646423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60707 ] 00:07:17.827 [2024-07-25 08:49:24.821119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.085 [2024-07-25 08:49:25.092673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.344 [2024-07-25 08:49:25.294715] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:18.948 08:49:25 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:18.948 08:49:25 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:18.948 08:49:25 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:07:19.207 08:49:26 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 60707 00:07:19.207 08:49:26 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 60707 ']' 00:07:19.207 08:49:26 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 60707 00:07:19.207 08:49:26 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:07:19.207 08:49:26 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:19.207 08:49:26 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60707 00:07:19.207 killing process with pid 60707 00:07:19.207 08:49:26 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:19.207 08:49:26 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:19.207 08:49:26 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60707' 00:07:19.207 08:49:26 alias_rpc -- common/autotest_common.sh@969 -- # kill 60707 00:07:19.207 08:49:26 alias_rpc -- common/autotest_common.sh@974 -- # wait 60707 00:07:21.737 ************************************ 00:07:21.737 END TEST alias_rpc 00:07:21.737 ************************************ 00:07:21.737 00:07:21.737 real 0m4.174s 00:07:21.737 user 0m4.294s 00:07:21.737 sys 0m0.608s 00:07:21.737 08:49:28 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.737 08:49:28 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.737 08:49:28 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:07:21.737 08:49:28 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:21.737 08:49:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:21.737 08:49:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.737 08:49:28 -- common/autotest_common.sh@10 -- # set +x 00:07:21.737 ************************************ 00:07:21.737 START TEST spdkcli_tcp 00:07:21.737 ************************************ 00:07:21.737 08:49:28 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:21.737 * Looking for test storage... 
00:07:21.737 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:07:21.737 08:49:28 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:07:21.737 08:49:28 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:07:21.737 08:49:28 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:07:21.737 08:49:28 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:21.737 08:49:28 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:21.737 08:49:28 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:21.737 08:49:28 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:21.737 08:49:28 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:21.737 08:49:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:21.737 08:49:28 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=60806 00:07:21.737 08:49:28 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:21.737 08:49:28 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 60806 00:07:21.737 08:49:28 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 60806 ']' 00:07:21.737 08:49:28 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.737 08:49:28 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:21.737 08:49:28 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.737 08:49:28 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:21.737 08:49:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:21.996 [2024-07-25 08:49:28.880763] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:21.996 [2024-07-25 08:49:28.880948] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60806 ] 00:07:21.996 [2024-07-25 08:49:29.049254] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:22.254 [2024-07-25 08:49:29.346667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.254 [2024-07-25 08:49:29.346669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.513 [2024-07-25 08:49:29.568715] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:23.079 08:49:30 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:23.079 08:49:30 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:07:23.079 08:49:30 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=60823 00:07:23.079 08:49:30 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:23.079 08:49:30 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:23.647 [ 00:07:23.647 "bdev_malloc_delete", 00:07:23.647 "bdev_malloc_create", 00:07:23.647 "bdev_null_resize", 00:07:23.647 "bdev_null_delete", 00:07:23.647 "bdev_null_create", 00:07:23.647 "bdev_nvme_cuse_unregister", 00:07:23.647 "bdev_nvme_cuse_register", 00:07:23.647 "bdev_opal_new_user", 00:07:23.647 "bdev_opal_set_lock_state", 00:07:23.647 "bdev_opal_delete", 00:07:23.647 "bdev_opal_get_info", 00:07:23.647 "bdev_opal_create", 00:07:23.647 "bdev_nvme_opal_revert", 00:07:23.647 "bdev_nvme_opal_init", 00:07:23.647 "bdev_nvme_send_cmd", 00:07:23.647 "bdev_nvme_get_path_iostat", 00:07:23.647 "bdev_nvme_get_mdns_discovery_info", 00:07:23.647 "bdev_nvme_stop_mdns_discovery", 00:07:23.647 "bdev_nvme_start_mdns_discovery", 00:07:23.647 "bdev_nvme_set_multipath_policy", 00:07:23.647 "bdev_nvme_set_preferred_path", 00:07:23.647 "bdev_nvme_get_io_paths", 00:07:23.647 "bdev_nvme_remove_error_injection", 00:07:23.647 "bdev_nvme_add_error_injection", 00:07:23.647 "bdev_nvme_get_discovery_info", 00:07:23.647 "bdev_nvme_stop_discovery", 00:07:23.647 "bdev_nvme_start_discovery", 00:07:23.647 "bdev_nvme_get_controller_health_info", 00:07:23.647 "bdev_nvme_disable_controller", 00:07:23.647 "bdev_nvme_enable_controller", 00:07:23.647 "bdev_nvme_reset_controller", 00:07:23.647 "bdev_nvme_get_transport_statistics", 00:07:23.647 "bdev_nvme_apply_firmware", 00:07:23.647 "bdev_nvme_detach_controller", 00:07:23.647 "bdev_nvme_get_controllers", 00:07:23.647 "bdev_nvme_attach_controller", 00:07:23.647 "bdev_nvme_set_hotplug", 00:07:23.647 "bdev_nvme_set_options", 00:07:23.647 "bdev_passthru_delete", 00:07:23.647 "bdev_passthru_create", 00:07:23.647 "bdev_lvol_set_parent_bdev", 00:07:23.647 "bdev_lvol_set_parent", 00:07:23.647 "bdev_lvol_check_shallow_copy", 00:07:23.647 "bdev_lvol_start_shallow_copy", 00:07:23.647 "bdev_lvol_grow_lvstore", 00:07:23.647 "bdev_lvol_get_lvols", 00:07:23.647 "bdev_lvol_get_lvstores", 00:07:23.647 "bdev_lvol_delete", 00:07:23.647 "bdev_lvol_set_read_only", 00:07:23.647 "bdev_lvol_resize", 00:07:23.647 "bdev_lvol_decouple_parent", 00:07:23.647 "bdev_lvol_inflate", 00:07:23.647 "bdev_lvol_rename", 00:07:23.647 "bdev_lvol_clone_bdev", 00:07:23.647 "bdev_lvol_clone", 00:07:23.647 "bdev_lvol_snapshot", 00:07:23.647 "bdev_lvol_create", 
00:07:23.647 "bdev_lvol_delete_lvstore", 00:07:23.647 "bdev_lvol_rename_lvstore", 00:07:23.647 "bdev_lvol_create_lvstore", 00:07:23.647 "bdev_raid_set_options", 00:07:23.647 "bdev_raid_remove_base_bdev", 00:07:23.647 "bdev_raid_add_base_bdev", 00:07:23.647 "bdev_raid_delete", 00:07:23.647 "bdev_raid_create", 00:07:23.647 "bdev_raid_get_bdevs", 00:07:23.647 "bdev_error_inject_error", 00:07:23.647 "bdev_error_delete", 00:07:23.647 "bdev_error_create", 00:07:23.647 "bdev_split_delete", 00:07:23.647 "bdev_split_create", 00:07:23.647 "bdev_delay_delete", 00:07:23.647 "bdev_delay_create", 00:07:23.647 "bdev_delay_update_latency", 00:07:23.647 "bdev_zone_block_delete", 00:07:23.647 "bdev_zone_block_create", 00:07:23.647 "blobfs_create", 00:07:23.647 "blobfs_detect", 00:07:23.647 "blobfs_set_cache_size", 00:07:23.647 "bdev_aio_delete", 00:07:23.647 "bdev_aio_rescan", 00:07:23.647 "bdev_aio_create", 00:07:23.647 "bdev_ftl_set_property", 00:07:23.647 "bdev_ftl_get_properties", 00:07:23.647 "bdev_ftl_get_stats", 00:07:23.647 "bdev_ftl_unmap", 00:07:23.647 "bdev_ftl_unload", 00:07:23.647 "bdev_ftl_delete", 00:07:23.647 "bdev_ftl_load", 00:07:23.647 "bdev_ftl_create", 00:07:23.647 "bdev_virtio_attach_controller", 00:07:23.647 "bdev_virtio_scsi_get_devices", 00:07:23.647 "bdev_virtio_detach_controller", 00:07:23.647 "bdev_virtio_blk_set_hotplug", 00:07:23.647 "bdev_iscsi_delete", 00:07:23.647 "bdev_iscsi_create", 00:07:23.647 "bdev_iscsi_set_options", 00:07:23.647 "bdev_uring_delete", 00:07:23.647 "bdev_uring_rescan", 00:07:23.647 "bdev_uring_create", 00:07:23.647 "accel_error_inject_error", 00:07:23.647 "ioat_scan_accel_module", 00:07:23.647 "dsa_scan_accel_module", 00:07:23.647 "iaa_scan_accel_module", 00:07:23.647 "vfu_virtio_create_scsi_endpoint", 00:07:23.647 "vfu_virtio_scsi_remove_target", 00:07:23.647 "vfu_virtio_scsi_add_target", 00:07:23.647 "vfu_virtio_create_blk_endpoint", 00:07:23.647 "vfu_virtio_delete_endpoint", 00:07:23.647 "keyring_file_remove_key", 00:07:23.647 "keyring_file_add_key", 00:07:23.647 "keyring_linux_set_options", 00:07:23.647 "iscsi_get_histogram", 00:07:23.647 "iscsi_enable_histogram", 00:07:23.647 "iscsi_set_options", 00:07:23.647 "iscsi_get_auth_groups", 00:07:23.647 "iscsi_auth_group_remove_secret", 00:07:23.647 "iscsi_auth_group_add_secret", 00:07:23.647 "iscsi_delete_auth_group", 00:07:23.647 "iscsi_create_auth_group", 00:07:23.647 "iscsi_set_discovery_auth", 00:07:23.647 "iscsi_get_options", 00:07:23.647 "iscsi_target_node_request_logout", 00:07:23.647 "iscsi_target_node_set_redirect", 00:07:23.647 "iscsi_target_node_set_auth", 00:07:23.647 "iscsi_target_node_add_lun", 00:07:23.647 "iscsi_get_stats", 00:07:23.647 "iscsi_get_connections", 00:07:23.647 "iscsi_portal_group_set_auth", 00:07:23.647 "iscsi_start_portal_group", 00:07:23.647 "iscsi_delete_portal_group", 00:07:23.647 "iscsi_create_portal_group", 00:07:23.647 "iscsi_get_portal_groups", 00:07:23.647 "iscsi_delete_target_node", 00:07:23.647 "iscsi_target_node_remove_pg_ig_maps", 00:07:23.647 "iscsi_target_node_add_pg_ig_maps", 00:07:23.648 "iscsi_create_target_node", 00:07:23.648 "iscsi_get_target_nodes", 00:07:23.648 "iscsi_delete_initiator_group", 00:07:23.648 "iscsi_initiator_group_remove_initiators", 00:07:23.648 "iscsi_initiator_group_add_initiators", 00:07:23.648 "iscsi_create_initiator_group", 00:07:23.648 "iscsi_get_initiator_groups", 00:07:23.648 "nvmf_set_crdt", 00:07:23.648 "nvmf_set_config", 00:07:23.648 "nvmf_set_max_subsystems", 00:07:23.648 "nvmf_stop_mdns_prr", 00:07:23.648 
"nvmf_publish_mdns_prr", 00:07:23.648 "nvmf_subsystem_get_listeners", 00:07:23.648 "nvmf_subsystem_get_qpairs", 00:07:23.648 "nvmf_subsystem_get_controllers", 00:07:23.648 "nvmf_get_stats", 00:07:23.648 "nvmf_get_transports", 00:07:23.648 "nvmf_create_transport", 00:07:23.648 "nvmf_get_targets", 00:07:23.648 "nvmf_delete_target", 00:07:23.648 "nvmf_create_target", 00:07:23.648 "nvmf_subsystem_allow_any_host", 00:07:23.648 "nvmf_subsystem_remove_host", 00:07:23.648 "nvmf_subsystem_add_host", 00:07:23.648 "nvmf_ns_remove_host", 00:07:23.648 "nvmf_ns_add_host", 00:07:23.648 "nvmf_subsystem_remove_ns", 00:07:23.648 "nvmf_subsystem_add_ns", 00:07:23.648 "nvmf_subsystem_listener_set_ana_state", 00:07:23.648 "nvmf_discovery_get_referrals", 00:07:23.648 "nvmf_discovery_remove_referral", 00:07:23.648 "nvmf_discovery_add_referral", 00:07:23.648 "nvmf_subsystem_remove_listener", 00:07:23.648 "nvmf_subsystem_add_listener", 00:07:23.648 "nvmf_delete_subsystem", 00:07:23.648 "nvmf_create_subsystem", 00:07:23.648 "nvmf_get_subsystems", 00:07:23.648 "env_dpdk_get_mem_stats", 00:07:23.648 "nbd_get_disks", 00:07:23.648 "nbd_stop_disk", 00:07:23.648 "nbd_start_disk", 00:07:23.648 "ublk_recover_disk", 00:07:23.648 "ublk_get_disks", 00:07:23.648 "ublk_stop_disk", 00:07:23.648 "ublk_start_disk", 00:07:23.648 "ublk_destroy_target", 00:07:23.648 "ublk_create_target", 00:07:23.648 "virtio_blk_create_transport", 00:07:23.648 "virtio_blk_get_transports", 00:07:23.648 "vhost_controller_set_coalescing", 00:07:23.648 "vhost_get_controllers", 00:07:23.648 "vhost_delete_controller", 00:07:23.648 "vhost_create_blk_controller", 00:07:23.648 "vhost_scsi_controller_remove_target", 00:07:23.648 "vhost_scsi_controller_add_target", 00:07:23.648 "vhost_start_scsi_controller", 00:07:23.648 "vhost_create_scsi_controller", 00:07:23.648 "thread_set_cpumask", 00:07:23.648 "framework_get_governor", 00:07:23.648 "framework_get_scheduler", 00:07:23.648 "framework_set_scheduler", 00:07:23.648 "framework_get_reactors", 00:07:23.648 "thread_get_io_channels", 00:07:23.648 "thread_get_pollers", 00:07:23.648 "thread_get_stats", 00:07:23.648 "framework_monitor_context_switch", 00:07:23.648 "spdk_kill_instance", 00:07:23.648 "log_enable_timestamps", 00:07:23.648 "log_get_flags", 00:07:23.648 "log_clear_flag", 00:07:23.648 "log_set_flag", 00:07:23.648 "log_get_level", 00:07:23.648 "log_set_level", 00:07:23.648 "log_get_print_level", 00:07:23.648 "log_set_print_level", 00:07:23.648 "framework_enable_cpumask_locks", 00:07:23.648 "framework_disable_cpumask_locks", 00:07:23.648 "framework_wait_init", 00:07:23.648 "framework_start_init", 00:07:23.648 "scsi_get_devices", 00:07:23.648 "bdev_get_histogram", 00:07:23.648 "bdev_enable_histogram", 00:07:23.648 "bdev_set_qos_limit", 00:07:23.648 "bdev_set_qd_sampling_period", 00:07:23.648 "bdev_get_bdevs", 00:07:23.648 "bdev_reset_iostat", 00:07:23.648 "bdev_get_iostat", 00:07:23.648 "bdev_examine", 00:07:23.648 "bdev_wait_for_examine", 00:07:23.648 "bdev_set_options", 00:07:23.648 "notify_get_notifications", 00:07:23.648 "notify_get_types", 00:07:23.648 "accel_get_stats", 00:07:23.648 "accel_set_options", 00:07:23.648 "accel_set_driver", 00:07:23.648 "accel_crypto_key_destroy", 00:07:23.648 "accel_crypto_keys_get", 00:07:23.648 "accel_crypto_key_create", 00:07:23.648 "accel_assign_opc", 00:07:23.648 "accel_get_module_info", 00:07:23.648 "accel_get_opc_assignments", 00:07:23.648 "vmd_rescan", 00:07:23.648 "vmd_remove_device", 00:07:23.648 "vmd_enable", 00:07:23.648 "sock_get_default_impl", 00:07:23.648 
"sock_set_default_impl", 00:07:23.648 "sock_impl_set_options", 00:07:23.648 "sock_impl_get_options", 00:07:23.648 "iobuf_get_stats", 00:07:23.648 "iobuf_set_options", 00:07:23.648 "keyring_get_keys", 00:07:23.648 "framework_get_pci_devices", 00:07:23.648 "framework_get_config", 00:07:23.648 "framework_get_subsystems", 00:07:23.648 "vfu_tgt_set_base_path", 00:07:23.648 "trace_get_info", 00:07:23.648 "trace_get_tpoint_group_mask", 00:07:23.648 "trace_disable_tpoint_group", 00:07:23.648 "trace_enable_tpoint_group", 00:07:23.648 "trace_clear_tpoint_mask", 00:07:23.648 "trace_set_tpoint_mask", 00:07:23.648 "spdk_get_version", 00:07:23.648 "rpc_get_methods" 00:07:23.648 ] 00:07:23.648 08:49:30 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:23.648 08:49:30 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:23.648 08:49:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:23.648 08:49:30 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:23.648 08:49:30 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 60806 00:07:23.648 08:49:30 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 60806 ']' 00:07:23.648 08:49:30 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 60806 00:07:23.648 08:49:30 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:07:23.648 08:49:30 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:23.648 08:49:30 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60806 00:07:23.648 killing process with pid 60806 00:07:23.648 08:49:30 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:23.648 08:49:30 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:23.648 08:49:30 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60806' 00:07:23.648 08:49:30 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 60806 00:07:23.648 08:49:30 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 60806 00:07:26.196 ************************************ 00:07:26.196 END TEST spdkcli_tcp 00:07:26.196 ************************************ 00:07:26.196 00:07:26.196 real 0m4.163s 00:07:26.196 user 0m7.266s 00:07:26.196 sys 0m0.696s 00:07:26.196 08:49:32 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.196 08:49:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:26.196 08:49:32 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:26.196 08:49:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:26.196 08:49:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.196 08:49:32 -- common/autotest_common.sh@10 -- # set +x 00:07:26.196 ************************************ 00:07:26.196 START TEST dpdk_mem_utility 00:07:26.196 ************************************ 00:07:26.196 08:49:32 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:26.196 * Looking for test storage... 00:07:26.196 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:07:26.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:26.196 08:49:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:26.196 08:49:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=60920 00:07:26.196 08:49:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:26.197 08:49:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 60920 00:07:26.197 08:49:32 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 60920 ']' 00:07:26.197 08:49:32 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.197 08:49:32 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:26.197 08:49:32 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.197 08:49:32 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:26.197 08:49:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:26.197 [2024-07-25 08:49:33.070152] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:26.197 [2024-07-25 08:49:33.070325] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60920 ] 00:07:26.197 [2024-07-25 08:49:33.234807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.457 [2024-07-25 08:49:33.475089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.716 [2024-07-25 08:49:33.685957] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:27.284 08:49:34 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:27.284 08:49:34 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:07:27.284 08:49:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:27.284 08:49:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:27.284 08:49:34 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.284 08:49:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:27.284 { 00:07:27.284 "filename": "/tmp/spdk_mem_dump.txt" 00:07:27.284 } 00:07:27.284 08:49:34 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.284 08:49:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:27.284 DPDK memory size 820.000000 MiB in 1 heap(s) 00:07:27.284 1 heaps totaling size 820.000000 MiB 00:07:27.284 size: 820.000000 MiB heap id: 0 00:07:27.284 end heaps---------- 00:07:27.284 8 mempools totaling size 598.116089 MiB 00:07:27.284 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:27.285 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:27.285 size: 84.521057 MiB name: bdev_io_60920 00:07:27.285 size: 51.011292 MiB name: evtpool_60920 00:07:27.285 size: 50.003479 MiB name: msgpool_60920 00:07:27.285 size: 21.763794 MiB name: PDU_Pool 00:07:27.285 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:27.285 size: 0.026123 MiB name: Session_Pool 00:07:27.285 end 
mempools------- 00:07:27.285 6 memzones totaling size 4.142822 MiB 00:07:27.285 size: 1.000366 MiB name: RG_ring_0_60920 00:07:27.285 size: 1.000366 MiB name: RG_ring_1_60920 00:07:27.285 size: 1.000366 MiB name: RG_ring_4_60920 00:07:27.285 size: 1.000366 MiB name: RG_ring_5_60920 00:07:27.285 size: 0.125366 MiB name: RG_ring_2_60920 00:07:27.285 size: 0.015991 MiB name: RG_ring_3_60920 00:07:27.285 end memzones------- 00:07:27.285 08:49:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:07:27.545 heap id: 0 total size: 820.000000 MiB number of busy elements: 300 number of free elements: 18 00:07:27.545 list of free elements. size: 18.451538 MiB 00:07:27.545 element at address: 0x200000400000 with size: 1.999451 MiB 00:07:27.545 element at address: 0x200000800000 with size: 1.996887 MiB 00:07:27.545 element at address: 0x200007000000 with size: 1.995972 MiB 00:07:27.545 element at address: 0x20000b200000 with size: 1.995972 MiB 00:07:27.545 element at address: 0x200019100040 with size: 0.999939 MiB 00:07:27.545 element at address: 0x200019500040 with size: 0.999939 MiB 00:07:27.545 element at address: 0x200019600000 with size: 0.999084 MiB 00:07:27.545 element at address: 0x200003e00000 with size: 0.996094 MiB 00:07:27.545 element at address: 0x200032200000 with size: 0.994324 MiB 00:07:27.545 element at address: 0x200018e00000 with size: 0.959656 MiB 00:07:27.545 element at address: 0x200019900040 with size: 0.936401 MiB 00:07:27.545 element at address: 0x200000200000 with size: 0.829956 MiB 00:07:27.545 element at address: 0x20001b000000 with size: 0.564148 MiB 00:07:27.545 element at address: 0x200019200000 with size: 0.487976 MiB 00:07:27.545 element at address: 0x200019a00000 with size: 0.485413 MiB 00:07:27.545 element at address: 0x200013800000 with size: 0.467896 MiB 00:07:27.545 element at address: 0x200028400000 with size: 0.390442 MiB 00:07:27.545 element at address: 0x200003a00000 with size: 0.351990 MiB 00:07:27.545 list of standard malloc elements. 
size: 199.284058 MiB 00:07:27.545 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:07:27.545 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:07:27.545 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:07:27.545 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:07:27.545 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:07:27.545 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:07:27.545 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:07:27.545 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:07:27.545 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:07:27.545 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:07:27.545 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:07:27.545 element at address: 0x2000002d4780 with size: 0.000244 MiB 00:07:27.545 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:07:27.545 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:07:27.545 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:07:27.545 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:07:27.545 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:07:27.545 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:07:27.545 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:07:27.545 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:07:27.545 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:07:27.545 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:07:27.545 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:07:27.545 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:07:27.545 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:07:27.545 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:07:27.545 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:07:27.545 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:07:27.545 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:07:27.545 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:07:27.545 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:07:27.546 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:07:27.546 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:07:27.546 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:07:27.546 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:07:27.546 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:07:27.546 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:07:27.546 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:07:27.546 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:07:27.546 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:07:27.546 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:07:27.546 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:07:27.546 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:07:27.546 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:07:27.546 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:07:27.546 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:07:27.546 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:07:27.546 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:07:27.546 element at address: 0x2000002d6e00 with size: 0.000244 MiB 
00:07:27.546 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:07:27.546 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:07:27.546 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:07:27.546 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:07:27.546 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:07:27.546 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:07:27.546 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:07:27.546 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:07:27.546 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:07:27.546 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:07:27.546 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:07:27.546 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:07:27.546 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:07:27.546 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:07:27.546 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x200003a5aec0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x200003aff980 with size: 0.000244 MiB 00:07:27.546 element at address: 0x200003affa80 with size: 0.000244 MiB 00:07:27.546 element at address: 0x200003eff000 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:07:27.546 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:07:27.546 element at 
address: 0x2000137ff280 with size: 0.000244 MiB 00:07:27.546 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:07:27.546 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:07:27.546 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:07:27.546 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:07:27.546 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:07:27.546 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:07:27.546 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:07:27.546 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:07:27.546 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:07:27.546 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:07:27.546 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:07:27.546 element at address: 0x200013877c80 with size: 0.000244 MiB 00:07:27.546 element at address: 0x200013877d80 with size: 0.000244 MiB 00:07:27.546 element at address: 0x200013877e80 with size: 0.000244 MiB 00:07:27.546 element at address: 0x200013877f80 with size: 0.000244 MiB 00:07:27.546 element at address: 0x200013878080 with size: 0.000244 MiB 00:07:27.546 element at address: 0x200013878180 with size: 0.000244 MiB 00:07:27.546 element at address: 0x200013878280 with size: 0.000244 MiB 00:07:27.546 element at address: 0x200013878380 with size: 0.000244 MiB 00:07:27.546 element at address: 0x200013878480 with size: 0.000244 MiB 00:07:27.546 element at address: 0x200013878580 with size: 0.000244 MiB 00:07:27.546 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20001927d0c0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:07:27.546 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:07:27.546 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x200019abc680 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20001b090ec0 
with size: 0.000244 MiB 00:07:27.546 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:07:27.546 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20001b093fc0 with size: 0.000244 MiB 
00:07:27.547 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:07:27.547 element at address: 0x200028463f40 with size: 0.000244 MiB 00:07:27.547 element at address: 0x200028464040 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846af80 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846b080 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846b180 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846b280 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846b380 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846b480 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846b580 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846b680 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846b780 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846b880 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846b980 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846be80 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846c080 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846c180 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846c280 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846c380 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846c480 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846c580 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846c680 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846c780 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846c880 with size: 0.000244 MiB 00:07:27.547 element at 
address: 0x20002846c980 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846d080 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846d180 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846d280 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846d380 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846d480 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846d580 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846d680 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846d780 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846d880 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846d980 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846da80 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846db80 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846de80 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846df80 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846e080 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846e180 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846e280 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846e380 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846e480 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846e580 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846e680 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846e780 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846e880 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846e980 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846f080 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846f180 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846f280 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846f380 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846f480 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846f580 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846f680 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846f780 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846f880 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846f980 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846fa80 
with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:07:27.547 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:07:27.547 list of memzone associated elements. size: 602.264404 MiB 00:07:27.547 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:07:27.547 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:27.547 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:07:27.547 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:27.547 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:07:27.547 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_60920_0 00:07:27.547 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:07:27.547 associated memzone info: size: 48.002930 MiB name: MP_evtpool_60920_0 00:07:27.547 element at address: 0x200003fff340 with size: 48.003113 MiB 00:07:27.547 associated memzone info: size: 48.002930 MiB name: MP_msgpool_60920_0 00:07:27.547 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:07:27.547 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:27.547 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:07:27.547 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:27.547 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:07:27.547 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_60920 00:07:27.547 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:07:27.547 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_60920 00:07:27.547 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:07:27.548 associated memzone info: size: 1.007996 MiB name: MP_evtpool_60920 00:07:27.548 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:07:27.548 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:27.548 element at address: 0x200019abc780 with size: 1.008179 MiB 00:07:27.548 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:27.548 element at address: 0x200018efde00 with size: 1.008179 MiB 00:07:27.548 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:27.548 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:07:27.548 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:27.548 element at address: 0x200003eff100 with size: 1.000549 MiB 00:07:27.548 associated memzone info: size: 1.000366 MiB name: RG_ring_0_60920 00:07:27.548 element at address: 0x200003affb80 with size: 1.000549 MiB 00:07:27.548 associated memzone info: size: 1.000366 MiB name: RG_ring_1_60920 00:07:27.548 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:07:27.548 associated memzone info: size: 1.000366 MiB name: RG_ring_4_60920 00:07:27.548 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:07:27.548 associated memzone info: size: 1.000366 MiB name: RG_ring_5_60920 00:07:27.548 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:07:27.548 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_60920 00:07:27.548 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:07:27.548 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:27.548 element at address: 0x200013878680 with size: 0.500549 MiB 
00:07:27.548 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:27.548 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:07:27.548 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:27.548 element at address: 0x200003adf740 with size: 0.125549 MiB 00:07:27.548 associated memzone info: size: 0.125366 MiB name: RG_ring_2_60920 00:07:27.548 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:07:27.548 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:27.548 element at address: 0x200028464140 with size: 0.023804 MiB 00:07:27.548 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:27.548 element at address: 0x200003adb500 with size: 0.016174 MiB 00:07:27.548 associated memzone info: size: 0.015991 MiB name: RG_ring_3_60920 00:07:27.548 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:07:27.548 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:27.548 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:07:27.548 associated memzone info: size: 0.000183 MiB name: MP_msgpool_60920 00:07:27.548 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:07:27.548 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_60920 00:07:27.548 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:07:27.548 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:27.548 08:49:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:27.548 08:49:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 60920 00:07:27.548 08:49:34 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 60920 ']' 00:07:27.548 08:49:34 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 60920 00:07:27.548 08:49:34 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:07:27.548 08:49:34 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:27.548 08:49:34 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60920 00:07:27.548 killing process with pid 60920 00:07:27.548 08:49:34 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:27.548 08:49:34 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:27.548 08:49:34 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60920' 00:07:27.548 08:49:34 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 60920 00:07:27.548 08:49:34 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 60920 00:07:30.078 00:07:30.078 real 0m3.947s 00:07:30.078 user 0m3.972s 00:07:30.078 sys 0m0.608s 00:07:30.078 08:49:36 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.078 08:49:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:30.078 ************************************ 00:07:30.078 END TEST dpdk_mem_utility 00:07:30.078 ************************************ 00:07:30.078 08:49:36 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:30.078 08:49:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:30.078 08:49:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.078 08:49:36 -- common/autotest_common.sh@10 -- # set +x 00:07:30.078 ************************************ 00:07:30.078 START TEST event 00:07:30.078 
************************************ 00:07:30.078 08:49:36 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:30.078 * Looking for test storage... 00:07:30.078 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:30.078 08:49:36 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:30.078 08:49:36 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:30.078 08:49:36 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:30.078 08:49:36 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:07:30.078 08:49:36 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.078 08:49:36 event -- common/autotest_common.sh@10 -- # set +x 00:07:30.078 ************************************ 00:07:30.078 START TEST event_perf 00:07:30.078 ************************************ 00:07:30.079 08:49:36 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:30.079 Running I/O for 1 seconds...[2024-07-25 08:49:37.009191] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:30.079 [2024-07-25 08:49:37.009574] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61020 ] 00:07:30.079 [2024-07-25 08:49:37.185258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:30.336 [2024-07-25 08:49:37.419129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.336 [2024-07-25 08:49:37.419223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:30.336 [2024-07-25 08:49:37.419365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.336 Running I/O for 1 seconds...[2024-07-25 08:49:37.419380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:31.782 00:07:31.782 lcore 0: 186622 00:07:31.782 lcore 1: 186620 00:07:31.782 lcore 2: 186621 00:07:31.782 lcore 3: 186620 00:07:31.782 done. 00:07:31.782 00:07:31.782 real 0m1.846s 00:07:31.782 user 0m4.561s 00:07:31.782 sys 0m0.151s 00:07:31.782 08:49:38 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:31.782 08:49:38 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:31.782 ************************************ 00:07:31.782 END TEST event_perf 00:07:31.782 ************************************ 00:07:31.782 08:49:38 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:31.782 08:49:38 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:31.782 08:49:38 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:31.782 08:49:38 event -- common/autotest_common.sh@10 -- # set +x 00:07:31.782 ************************************ 00:07:31.782 START TEST event_reactor 00:07:31.782 ************************************ 00:07:31.782 08:49:38 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:31.782 [2024-07-25 08:49:38.896179] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:31.783 [2024-07-25 08:49:38.896911] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61064 ] 00:07:32.041 [2024-07-25 08:49:39.059014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.299 [2024-07-25 08:49:39.282405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.672 test_start 00:07:33.672 oneshot 00:07:33.672 tick 100 00:07:33.672 tick 100 00:07:33.672 tick 250 00:07:33.672 tick 100 00:07:33.672 tick 100 00:07:33.672 tick 250 00:07:33.672 tick 100 00:07:33.672 tick 500 00:07:33.672 tick 100 00:07:33.672 tick 100 00:07:33.672 tick 250 00:07:33.672 tick 100 00:07:33.672 tick 100 00:07:33.672 test_end 00:07:33.672 00:07:33.672 real 0m1.820s 00:07:33.672 user 0m1.605s 00:07:33.672 sys 0m0.104s 00:07:33.672 ************************************ 00:07:33.672 END TEST event_reactor 00:07:33.672 ************************************ 00:07:33.672 08:49:40 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:33.672 08:49:40 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:33.672 08:49:40 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:33.672 08:49:40 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:33.672 08:49:40 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:33.672 08:49:40 event -- common/autotest_common.sh@10 -- # set +x 00:07:33.672 ************************************ 00:07:33.672 START TEST event_reactor_perf 00:07:33.672 ************************************ 00:07:33.672 08:49:40 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:33.672 [2024-07-25 08:49:40.776477] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:33.672 [2024-07-25 08:49:40.776648] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61096 ] 00:07:33.930 [2024-07-25 08:49:40.955209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.189 [2024-07-25 08:49:41.245464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.563 test_start 00:07:35.563 test_end 00:07:35.563 Performance: 278736 events per second 00:07:35.563 00:07:35.563 real 0m1.890s 00:07:35.563 user 0m1.660s 00:07:35.563 sys 0m0.118s 00:07:35.563 08:49:42 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.563 ************************************ 00:07:35.563 END TEST event_reactor_perf 00:07:35.563 ************************************ 00:07:35.564 08:49:42 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:35.564 08:49:42 event -- event/event.sh@49 -- # uname -s 00:07:35.564 08:49:42 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:35.564 08:49:42 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:35.564 08:49:42 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:35.564 08:49:42 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.564 08:49:42 event -- common/autotest_common.sh@10 -- # set +x 00:07:35.822 ************************************ 00:07:35.822 START TEST event_scheduler 00:07:35.822 ************************************ 00:07:35.822 08:49:42 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:35.822 * Looking for test storage... 00:07:35.822 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:07:35.822 08:49:42 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:35.822 08:49:42 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=61164 00:07:35.822 08:49:42 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:35.822 08:49:42 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:35.822 08:49:42 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 61164 00:07:35.822 08:49:42 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 61164 ']' 00:07:35.822 08:49:42 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.822 08:49:42 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:35.822 08:49:42 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.823 08:49:42 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:35.823 08:49:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:35.823 [2024-07-25 08:49:42.880047] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:35.823 [2024-07-25 08:49:42.880247] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61164 ] 00:07:36.081 [2024-07-25 08:49:43.057911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:36.416 [2024-07-25 08:49:43.341263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.416 [2024-07-25 08:49:43.341361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.417 [2024-07-25 08:49:43.341475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.417 [2024-07-25 08:49:43.341541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:36.983 08:49:43 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:36.983 08:49:43 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:07:36.983 08:49:43 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:36.983 08:49:43 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.983 08:49:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:36.983 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:36.983 POWER: Cannot set governor of lcore 0 to userspace 00:07:36.983 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:36.983 POWER: Cannot set governor of lcore 0 to performance 00:07:36.983 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:36.983 POWER: Cannot set governor of lcore 0 to userspace 00:07:36.983 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:36.983 POWER: Cannot set governor of lcore 0 to userspace 00:07:36.983 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:07:36.983 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:07:36.983 POWER: Unable to set Power Management Environment for lcore 0 00:07:36.983 [2024-07-25 08:49:43.864692] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:07:36.983 [2024-07-25 08:49:43.864714] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:07:36.983 [2024-07-25 08:49:43.864731] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:07:36.983 [2024-07-25 08:49:43.864807] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:36.983 [2024-07-25 08:49:43.864842] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:36.983 [2024-07-25 08:49:43.864855] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:36.983 08:49:43 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.983 08:49:43 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:36.983 08:49:43 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.983 08:49:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:36.983 [2024-07-25 08:49:44.088291] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:37.242 [2024-07-25 08:49:44.203409] 
scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:07:37.242 08:49:44 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.242 08:49:44 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:37.242 08:49:44 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:37.242 08:49:44 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:37.242 08:49:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:37.242 ************************************ 00:07:37.242 START TEST scheduler_create_thread 00:07:37.242 ************************************ 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:37.242 2 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:37.242 3 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:37.242 4 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:37.242 5 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:37.242 6 00:07:37.242 
08:49:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:37.242 7 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:37.242 8 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:37.242 9 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:37.242 10 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.242 08:49:44 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.242 00:07:37.242 real 0m0.107s 00:07:37.242 user 0m0.017s 00:07:37.242 sys 0m0.005s 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:37.242 08:49:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:37.242 ************************************ 00:07:37.242 END TEST scheduler_create_thread 00:07:37.242 ************************************ 00:07:37.501 08:49:44 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:37.501 08:49:44 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 61164 00:07:37.501 08:49:44 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 61164 ']' 00:07:37.501 08:49:44 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 61164 00:07:37.501 08:49:44 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:07:37.501 08:49:44 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:37.501 08:49:44 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61164 00:07:37.501 08:49:44 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:37.501 08:49:44 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:37.501 killing process with pid 61164 00:07:37.501 08:49:44 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61164' 00:07:37.501 08:49:44 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 61164 00:07:37.501 08:49:44 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 61164 00:07:37.759 [2024-07-25 08:49:44.810324] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
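For reference, the scheduler_create_thread run that just finished boils down to the RPC sequence below. This is a condensed sketch assembled only from the log lines above (rpc_cmd is the autotest wrapper around scripts/rpc.py, and the thread ids are simply whatever the create calls returned, e.g. 11 and 12 here):
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100   # one fully busy thread pinned per core (masks 0x1..0x8)
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned   -m 0x1 -a 0     # matching idle threads on the same cores
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
  thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
  rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50              # raise it to 50% busy
  deleted_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
  rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$deleted_id"                    # and remove it again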
00:07:39.134 00:07:39.134 real 0m3.333s 00:07:39.134 user 0m4.954s 00:07:39.134 sys 0m0.526s 00:07:39.134 08:49:46 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:39.134 ************************************ 00:07:39.134 END TEST event_scheduler 00:07:39.134 ************************************ 00:07:39.134 08:49:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:39.134 08:49:46 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:39.134 08:49:46 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:39.134 08:49:46 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:39.134 08:49:46 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:39.134 08:49:46 event -- common/autotest_common.sh@10 -- # set +x 00:07:39.134 ************************************ 00:07:39.134 START TEST app_repeat 00:07:39.134 ************************************ 00:07:39.134 08:49:46 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:07:39.134 08:49:46 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.134 08:49:46 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:39.134 08:49:46 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:39.134 08:49:46 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:39.134 08:49:46 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:39.134 08:49:46 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:39.134 08:49:46 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:39.134 08:49:46 event.app_repeat -- event/event.sh@19 -- # repeat_pid=61254 00:07:39.134 08:49:46 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:39.134 08:49:46 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:39.134 Process app_repeat pid: 61254 00:07:39.134 08:49:46 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 61254' 00:07:39.134 08:49:46 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:39.134 spdk_app_start Round 0 00:07:39.134 08:49:46 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:39.134 08:49:46 event.app_repeat -- event/event.sh@25 -- # waitforlisten 61254 /var/tmp/spdk-nbd.sock 00:07:39.134 08:49:46 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 61254 ']' 00:07:39.134 08:49:46 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:39.134 08:49:46 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:39.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:39.134 08:49:46 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:39.134 08:49:46 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:39.134 08:49:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:39.134 [2024-07-25 08:49:46.143511] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
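The app_repeat test starting here follows the same launch pattern as the other event tests: the binary is started against its own RPC socket and the harness waits for that socket to come up before issuing any bdev/nbd commands. A minimal sketch of that launch, using only what the log shows (backgrounding with & and capturing $! are presumed from repeat_pid=61254; waitforlisten is the autotest_common.sh helper):
  /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
  repeat_pid=$!
  waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock   # block until the app answers on its UNIX socket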
00:07:39.134 [2024-07-25 08:49:46.143743] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61254 ] 00:07:39.392 [2024-07-25 08:49:46.325021] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:39.650 [2024-07-25 08:49:46.624873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.650 [2024-07-25 08:49:46.624883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.907 [2024-07-25 08:49:46.835831] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:40.165 08:49:47 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:40.165 08:49:47 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:40.165 08:49:47 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:40.422 Malloc0 00:07:40.422 08:49:47 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:40.989 Malloc1 00:07:40.989 08:49:47 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:40.989 08:49:47 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:40.989 08:49:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:40.989 08:49:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:40.989 08:49:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:40.989 08:49:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:40.989 08:49:47 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:40.989 08:49:47 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:40.989 08:49:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:40.989 08:49:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:40.989 08:49:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:40.989 08:49:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:40.989 08:49:47 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:40.989 08:49:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:40.989 08:49:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:40.989 08:49:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:41.247 /dev/nbd0 00:07:41.247 08:49:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:41.247 08:49:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:41.247 08:49:48 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:41.247 08:49:48 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:41.247 08:49:48 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:41.247 08:49:48 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:41.247 08:49:48 
event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:41.247 08:49:48 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:41.247 08:49:48 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:41.247 08:49:48 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:41.247 08:49:48 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:41.247 1+0 records in 00:07:41.247 1+0 records out 00:07:41.247 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000352997 s, 11.6 MB/s 00:07:41.247 08:49:48 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:41.247 08:49:48 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:41.247 08:49:48 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:41.247 08:49:48 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:41.248 08:49:48 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:41.248 08:49:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:41.248 08:49:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:41.248 08:49:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:41.505 /dev/nbd1 00:07:41.505 08:49:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:41.505 08:49:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:41.505 08:49:48 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:41.505 08:49:48 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:41.505 08:49:48 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:41.505 08:49:48 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:41.505 08:49:48 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:41.505 08:49:48 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:41.505 08:49:48 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:41.505 08:49:48 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:41.505 08:49:48 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:41.505 1+0 records in 00:07:41.505 1+0 records out 00:07:41.505 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294562 s, 13.9 MB/s 00:07:41.505 08:49:48 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:41.505 08:49:48 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:41.505 08:49:48 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:41.505 08:49:48 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:41.505 08:49:48 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:41.505 08:49:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:41.505 08:49:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:41.505 08:49:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:07:41.505 08:49:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:41.505 08:49:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:41.812 08:49:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:41.812 { 00:07:41.812 "nbd_device": "/dev/nbd0", 00:07:41.812 "bdev_name": "Malloc0" 00:07:41.812 }, 00:07:41.812 { 00:07:41.812 "nbd_device": "/dev/nbd1", 00:07:41.812 "bdev_name": "Malloc1" 00:07:41.812 } 00:07:41.812 ]' 00:07:41.812 08:49:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:41.812 { 00:07:41.812 "nbd_device": "/dev/nbd0", 00:07:41.812 "bdev_name": "Malloc0" 00:07:41.812 }, 00:07:41.812 { 00:07:41.812 "nbd_device": "/dev/nbd1", 00:07:41.812 "bdev_name": "Malloc1" 00:07:41.812 } 00:07:41.812 ]' 00:07:41.812 08:49:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:41.812 08:49:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:41.812 /dev/nbd1' 00:07:41.812 08:49:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:41.812 08:49:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:41.812 /dev/nbd1' 00:07:41.812 08:49:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:41.812 08:49:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:41.812 08:49:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:41.812 08:49:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:41.812 08:49:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:41.812 08:49:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:41.812 08:49:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:41.812 08:49:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:41.812 08:49:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:41.812 08:49:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:41.812 08:49:48 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:41.812 256+0 records in 00:07:41.812 256+0 records out 00:07:41.812 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00823851 s, 127 MB/s 00:07:41.812 08:49:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:41.812 08:49:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:41.812 256+0 records in 00:07:41.812 256+0 records out 00:07:41.812 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0308434 s, 34.0 MB/s 00:07:41.812 08:49:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:41.812 08:49:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:41.812 256+0 records in 00:07:41.812 256+0 records out 00:07:41.812 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0333732 s, 31.4 MB/s 00:07:41.812 08:49:48 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:41.812 08:49:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:41.812 08:49:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:41.812 08:49:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:41.812 08:49:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:41.812 08:49:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:41.812 08:49:48 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:41.812 08:49:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:41.812 08:49:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:41.812 08:49:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:41.812 08:49:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:41.812 08:49:48 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:41.812 08:49:48 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:41.812 08:49:48 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:41.812 08:49:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:41.812 08:49:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:41.812 08:49:48 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:41.812 08:49:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:41.812 08:49:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:42.070 08:49:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:42.070 08:49:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:42.070 08:49:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:42.070 08:49:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:42.070 08:49:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:42.070 08:49:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:42.070 08:49:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:42.070 08:49:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:42.070 08:49:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:42.070 08:49:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:42.635 08:49:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:42.635 08:49:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:42.635 08:49:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:42.635 08:49:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:42.635 08:49:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:42.635 08:49:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:42.635 08:49:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:42.635 08:49:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:42.635 08:49:49 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:42.635 08:49:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:42.635 08:49:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:42.635 08:49:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:42.635 08:49:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:42.635 08:49:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:42.893 08:49:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:42.893 08:49:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:42.893 08:49:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:42.893 08:49:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:42.893 08:49:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:42.893 08:49:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:42.893 08:49:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:42.893 08:49:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:42.893 08:49:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:42.893 08:49:49 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:43.150 08:49:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:44.534 [2024-07-25 08:49:51.461551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:44.793 [2024-07-25 08:49:51.710596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.793 [2024-07-25 08:49:51.710602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.051 [2024-07-25 08:49:51.911673] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:45.051 [2024-07-25 08:49:51.911799] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:45.051 [2024-07-25 08:49:51.911838] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:46.423 08:49:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:46.423 spdk_app_start Round 1 00:07:46.423 08:49:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:46.423 08:49:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 61254 /var/tmp/spdk-nbd.sock 00:07:46.423 08:49:53 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 61254 ']' 00:07:46.423 08:49:53 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:46.423 08:49:53 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:46.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:46.423 08:49:53 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
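Round 0 above exercised the nbd_common.sh write/verify path end to end; stripped of the xtrace noise it is essentially the sequence below, with commands and sizes taken from the log. Here rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py and nbdrandtest for the temporary file under test/event/ (the same steps are repeated for Malloc1 on /dev/nbd1, and again in Round 1):
  rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096                  # create Malloc0
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0            # expose it as /dev/nbd0
  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256                          # 1 MiB of random data
  dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct                # write it through the nbd device
  cmp -b -n 1M nbdrandtest /dev/nbd0                                           # read back and verify
  rm nbdrandtest
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0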
00:07:46.423 08:49:53 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:46.423 08:49:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:46.423 08:49:53 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:46.423 08:49:53 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:46.423 08:49:53 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:46.681 Malloc0 00:07:46.681 08:49:53 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:47.246 Malloc1 00:07:47.246 08:49:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:47.246 08:49:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:47.246 08:49:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:47.246 08:49:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:47.246 08:49:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:47.246 08:49:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:47.246 08:49:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:47.246 08:49:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:47.246 08:49:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:47.246 08:49:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:47.246 08:49:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:47.246 08:49:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:47.246 08:49:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:47.246 08:49:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:47.246 08:49:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:47.246 08:49:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:47.505 /dev/nbd0 00:07:47.505 08:49:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:47.505 08:49:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:47.505 08:49:54 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:47.505 08:49:54 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:47.505 08:49:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:47.505 08:49:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:47.505 08:49:54 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:47.505 08:49:54 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:47.505 08:49:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:47.505 08:49:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:47.505 08:49:54 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:47.505 1+0 records in 00:07:47.505 1+0 records out 
00:07:47.505 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262744 s, 15.6 MB/s 00:07:47.505 08:49:54 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:47.505 08:49:54 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:47.505 08:49:54 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:47.505 08:49:54 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:47.505 08:49:54 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:47.505 08:49:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:47.505 08:49:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:47.505 08:49:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:47.764 /dev/nbd1 00:07:47.764 08:49:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:47.764 08:49:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:47.764 08:49:54 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:47.764 08:49:54 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:47.764 08:49:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:47.764 08:49:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:47.764 08:49:54 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:47.764 08:49:54 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:47.764 08:49:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:47.764 08:49:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:47.764 08:49:54 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:47.764 1+0 records in 00:07:47.764 1+0 records out 00:07:47.764 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421636 s, 9.7 MB/s 00:07:47.764 08:49:54 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:47.764 08:49:54 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:47.764 08:49:54 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:47.764 08:49:54 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:47.764 08:49:54 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:47.764 08:49:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:47.764 08:49:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:47.764 08:49:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:47.764 08:49:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:47.764 08:49:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:48.023 08:49:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:48.023 { 00:07:48.023 "nbd_device": "/dev/nbd0", 00:07:48.023 "bdev_name": "Malloc0" 00:07:48.023 }, 00:07:48.023 { 00:07:48.023 "nbd_device": "/dev/nbd1", 00:07:48.023 "bdev_name": "Malloc1" 00:07:48.023 } 
00:07:48.023 ]' 00:07:48.023 08:49:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:48.023 { 00:07:48.023 "nbd_device": "/dev/nbd0", 00:07:48.023 "bdev_name": "Malloc0" 00:07:48.023 }, 00:07:48.023 { 00:07:48.023 "nbd_device": "/dev/nbd1", 00:07:48.023 "bdev_name": "Malloc1" 00:07:48.023 } 00:07:48.023 ]' 00:07:48.023 08:49:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:48.023 08:49:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:48.023 /dev/nbd1' 00:07:48.023 08:49:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:48.023 /dev/nbd1' 00:07:48.023 08:49:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:48.023 08:49:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:48.023 08:49:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:48.023 08:49:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:48.023 08:49:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:48.023 08:49:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:48.023 08:49:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:48.023 08:49:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:48.023 08:49:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:48.023 08:49:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:48.023 08:49:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:48.023 08:49:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:48.023 256+0 records in 00:07:48.023 256+0 records out 00:07:48.023 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00841614 s, 125 MB/s 00:07:48.023 08:49:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:48.023 08:49:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:48.282 256+0 records in 00:07:48.282 256+0 records out 00:07:48.282 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0302263 s, 34.7 MB/s 00:07:48.282 08:49:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:48.282 08:49:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:48.282 256+0 records in 00:07:48.282 256+0 records out 00:07:48.282 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0345189 s, 30.4 MB/s 00:07:48.282 08:49:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:48.282 08:49:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:48.282 08:49:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:48.282 08:49:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:48.282 08:49:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:48.282 08:49:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:48.282 08:49:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:48.282 08:49:55 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:48.282 08:49:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:48.282 08:49:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:48.282 08:49:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:48.282 08:49:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:48.282 08:49:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:48.282 08:49:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:48.282 08:49:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:48.282 08:49:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:48.282 08:49:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:48.282 08:49:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:48.282 08:49:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:48.540 08:49:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:48.540 08:49:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:48.540 08:49:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:48.540 08:49:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:48.540 08:49:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:48.540 08:49:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:48.540 08:49:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:48.540 08:49:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:48.540 08:49:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:48.540 08:49:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:48.799 08:49:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:48.799 08:49:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:48.799 08:49:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:48.799 08:49:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:48.799 08:49:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:48.799 08:49:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:48.799 08:49:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:48.799 08:49:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:48.799 08:49:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:48.799 08:49:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:48.799 08:49:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:49.367 08:49:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:49.367 08:49:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:49.367 08:49:56 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:49.367 08:49:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:49.367 08:49:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:49.367 08:49:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:49.367 08:49:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:49.367 08:49:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:49.367 08:49:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:49.367 08:49:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:49.367 08:49:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:49.367 08:49:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:49.367 08:49:56 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:49.625 08:49:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:51.000 [2024-07-25 08:49:57.952675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:51.258 [2024-07-25 08:49:58.187178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.258 [2024-07-25 08:49:58.187180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.516 [2024-07-25 08:49:58.379254] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:51.516 [2024-07-25 08:49:58.379451] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:51.516 [2024-07-25 08:49:58.379475] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:52.892 spdk_app_start Round 2 00:07:52.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:52.892 08:49:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:52.892 08:49:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:52.892 08:49:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 61254 /var/tmp/spdk-nbd.sock 00:07:52.892 08:49:59 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 61254 ']' 00:07:52.892 08:49:59 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:52.892 08:49:59 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:52.892 08:49:59 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
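The Round 1 pass above is, once the xtrace noise is stripped away, a short RPC sequence against the spdk-nbd socket: create two malloc bdevs, export each as a kernel NBD device, and confirm both exports with nbd_get_disks. A minimal sketch of that setup step, using only the calls visible in the trace (socket path, bdev size/block-size arguments, and device names are copied from this run and would differ elsewhere):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    # two 64 MB malloc bdevs with a 4096-byte block size -> Malloc0, Malloc1
    "$rpc" -s "$sock" bdev_malloc_create 64 4096
    "$rpc" -s "$sock" bdev_malloc_create 64 4096
    # expose them as kernel block devices
    "$rpc" -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
    "$rpc" -s "$sock" nbd_start_disk Malloc1 /dev/nbd1
    # the target should report both exports back
    "$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device'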
00:07:52.892 08:49:59 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:52.892 08:49:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:52.892 08:49:59 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:52.892 08:49:59 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:52.892 08:49:59 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:53.459 Malloc0 00:07:53.459 08:50:00 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:53.718 Malloc1 00:07:53.718 08:50:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:53.718 08:50:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:53.718 08:50:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:53.718 08:50:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:53.718 08:50:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:53.718 08:50:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:53.718 08:50:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:53.718 08:50:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:53.718 08:50:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:53.718 08:50:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:53.718 08:50:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:53.718 08:50:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:53.718 08:50:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:53.718 08:50:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:53.718 08:50:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:53.718 08:50:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:53.976 /dev/nbd0 00:07:53.976 08:50:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:53.976 08:50:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:53.976 08:50:00 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:53.976 08:50:00 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:53.976 08:50:00 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:53.976 08:50:00 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:53.976 08:50:00 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:53.976 08:50:00 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:53.976 08:50:00 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:53.976 08:50:00 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:53.976 08:50:00 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:53.976 1+0 records in 00:07:53.976 1+0 records out 
00:07:53.976 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299728 s, 13.7 MB/s 00:07:53.976 08:50:00 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:53.976 08:50:00 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:53.976 08:50:00 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:53.976 08:50:00 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:53.976 08:50:00 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:53.976 08:50:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:53.976 08:50:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:53.976 08:50:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:54.235 /dev/nbd1 00:07:54.235 08:50:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:54.235 08:50:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:54.235 08:50:01 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:54.235 08:50:01 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:54.235 08:50:01 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:54.235 08:50:01 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:54.235 08:50:01 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:54.235 08:50:01 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:54.235 08:50:01 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:54.235 08:50:01 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:54.235 08:50:01 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:54.235 1+0 records in 00:07:54.235 1+0 records out 00:07:54.235 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000469217 s, 8.7 MB/s 00:07:54.235 08:50:01 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:54.235 08:50:01 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:54.235 08:50:01 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:54.235 08:50:01 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:54.235 08:50:01 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:54.235 08:50:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:54.235 08:50:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:54.235 08:50:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:54.235 08:50:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:54.235 08:50:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:54.504 08:50:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:54.504 { 00:07:54.504 "nbd_device": "/dev/nbd0", 00:07:54.504 "bdev_name": "Malloc0" 00:07:54.504 }, 00:07:54.504 { 00:07:54.504 "nbd_device": "/dev/nbd1", 00:07:54.504 "bdev_name": "Malloc1" 00:07:54.504 } 
00:07:54.504 ]' 00:07:54.504 08:50:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:54.504 { 00:07:54.504 "nbd_device": "/dev/nbd0", 00:07:54.504 "bdev_name": "Malloc0" 00:07:54.504 }, 00:07:54.504 { 00:07:54.504 "nbd_device": "/dev/nbd1", 00:07:54.504 "bdev_name": "Malloc1" 00:07:54.504 } 00:07:54.504 ]' 00:07:54.504 08:50:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:54.504 08:50:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:54.504 /dev/nbd1' 00:07:54.504 08:50:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:54.504 /dev/nbd1' 00:07:54.504 08:50:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:54.504 08:50:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:54.504 08:50:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:54.504 08:50:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:54.504 08:50:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:54.504 08:50:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:54.792 08:50:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:54.792 08:50:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:54.792 08:50:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:54.792 08:50:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:54.792 08:50:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:54.792 08:50:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:54.792 256+0 records in 00:07:54.792 256+0 records out 00:07:54.792 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00833557 s, 126 MB/s 00:07:54.792 08:50:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:54.792 08:50:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:54.792 256+0 records in 00:07:54.792 256+0 records out 00:07:54.792 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0332241 s, 31.6 MB/s 00:07:54.792 08:50:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:54.792 08:50:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:54.792 256+0 records in 00:07:54.792 256+0 records out 00:07:54.792 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.030188 s, 34.7 MB/s 00:07:54.792 08:50:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:54.792 08:50:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:54.792 08:50:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:54.792 08:50:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:54.792 08:50:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:54.792 08:50:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:54.792 08:50:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:54.792 08:50:01 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:07:54.792 08:50:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:54.792 08:50:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:54.792 08:50:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:54.792 08:50:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:54.792 08:50:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:54.792 08:50:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:54.792 08:50:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:54.792 08:50:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:54.792 08:50:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:54.792 08:50:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:54.792 08:50:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:55.051 08:50:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:55.051 08:50:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:55.051 08:50:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:55.051 08:50:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:55.051 08:50:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:55.051 08:50:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:55.051 08:50:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:55.051 08:50:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:55.051 08:50:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:55.051 08:50:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:55.309 08:50:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:55.309 08:50:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:55.309 08:50:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:55.309 08:50:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:55.309 08:50:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:55.309 08:50:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:55.309 08:50:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:55.309 08:50:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:55.309 08:50:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:55.309 08:50:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:55.309 08:50:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:55.567 08:50:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:55.567 08:50:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:55.567 08:50:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:07:55.567 08:50:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:55.567 08:50:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:55.567 08:50:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:55.567 08:50:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:55.567 08:50:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:55.567 08:50:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:55.567 08:50:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:55.567 08:50:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:55.567 08:50:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:55.567 08:50:02 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:55.826 08:50:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:57.200 [2024-07-25 08:50:04.227817] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:57.458 [2024-07-25 08:50:04.481712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.458 [2024-07-25 08:50:04.481729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.716 [2024-07-25 08:50:04.685252] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:57.716 [2024-07-25 08:50:04.685499] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:57.716 [2024-07-25 08:50:04.685530] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:59.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:59.099 08:50:05 event.app_repeat -- event/event.sh@38 -- # waitforlisten 61254 /var/tmp/spdk-nbd.sock 00:07:59.099 08:50:05 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 61254 ']' 00:07:59.099 08:50:05 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:59.099 08:50:05 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:59.099 08:50:05 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
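Each round then verifies data through the exported devices the same way: generate a 1 MiB random file, copy it onto /dev/nbd0 and /dev/nbd1 with O_DIRECT, compare the first 1 MiB back from each device, and tear everything down. A condensed sketch of that verify-and-cleanup step, reusing the commands and paths shown in the trace (the real test writes both devices first and compares afterwards; the loop below folds the two phases together for brevity):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256            # 1 MiB of random data
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct  # push it through the NBD export
        cmp -b -n 1M "$tmp" "$dev"                             # byte-compare the first 1 MiB
    done
    rm "$tmp"
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd1
    "$rpc" -s "$sock" spdk_kill_instance SIGTERM               # ends the current app_repeat round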
00:07:59.099 08:50:05 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:59.099 08:50:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:59.358 08:50:06 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:59.358 08:50:06 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:59.358 08:50:06 event.app_repeat -- event/event.sh@39 -- # killprocess 61254 00:07:59.358 08:50:06 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 61254 ']' 00:07:59.358 08:50:06 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 61254 00:07:59.358 08:50:06 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:07:59.358 08:50:06 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:59.358 08:50:06 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61254 00:07:59.358 08:50:06 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:59.358 killing process with pid 61254 00:07:59.358 08:50:06 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:59.358 08:50:06 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61254' 00:07:59.358 08:50:06 event.app_repeat -- common/autotest_common.sh@969 -- # kill 61254 00:07:59.358 08:50:06 event.app_repeat -- common/autotest_common.sh@974 -- # wait 61254 00:08:00.733 spdk_app_start is called in Round 0. 00:08:00.733 Shutdown signal received, stop current app iteration 00:08:00.733 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:08:00.733 spdk_app_start is called in Round 1. 00:08:00.733 Shutdown signal received, stop current app iteration 00:08:00.733 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:08:00.733 spdk_app_start is called in Round 2. 00:08:00.733 Shutdown signal received, stop current app iteration 00:08:00.733 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:08:00.733 spdk_app_start is called in Round 3. 00:08:00.733 Shutdown signal received, stop current app iteration 00:08:00.733 ************************************ 00:08:00.734 END TEST app_repeat 00:08:00.734 08:50:07 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:00.734 08:50:07 event.app_repeat -- event/event.sh@42 -- # return 0 00:08:00.734 00:08:00.734 real 0m21.364s 00:08:00.734 user 0m45.596s 00:08:00.734 sys 0m3.160s 00:08:00.734 08:50:07 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:00.734 08:50:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:00.734 ************************************ 00:08:00.734 08:50:07 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:00.734 08:50:07 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:00.734 08:50:07 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:00.734 08:50:07 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:00.734 08:50:07 event -- common/autotest_common.sh@10 -- # set +x 00:08:00.734 ************************************ 00:08:00.734 START TEST cpu_locks 00:08:00.734 ************************************ 00:08:00.734 08:50:07 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:00.734 * Looking for test storage... 
00:08:00.734 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:00.734 08:50:07 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:00.734 08:50:07 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:00.734 08:50:07 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:00.734 08:50:07 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:00.734 08:50:07 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:00.734 08:50:07 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:00.734 08:50:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:00.734 ************************************ 00:08:00.734 START TEST default_locks 00:08:00.734 ************************************ 00:08:00.734 08:50:07 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:08:00.734 08:50:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=61717 00:08:00.734 08:50:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:00.734 08:50:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 61717 00:08:00.734 08:50:07 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 61717 ']' 00:08:00.734 08:50:07 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.734 08:50:07 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:00.734 08:50:07 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.734 08:50:07 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:00.734 08:50:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:00.734 [2024-07-25 08:50:07.750794] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:00.734 [2024-07-25 08:50:07.751021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61717 ] 00:08:00.992 [2024-07-25 08:50:07.930392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.251 [2024-07-25 08:50:08.177458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.510 [2024-07-25 08:50:08.390616] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:02.077 08:50:09 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:02.077 08:50:09 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:08:02.077 08:50:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 61717 00:08:02.077 08:50:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 61717 00:08:02.077 08:50:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:02.662 08:50:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 61717 00:08:02.662 08:50:09 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 61717 ']' 00:08:02.662 08:50:09 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 61717 00:08:02.662 08:50:09 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:08:02.662 08:50:09 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:02.662 08:50:09 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61717 00:08:02.662 killing process with pid 61717 00:08:02.662 08:50:09 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:02.662 08:50:09 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:02.662 08:50:09 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61717' 00:08:02.662 08:50:09 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 61717 00:08:02.662 08:50:09 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 61717 00:08:05.207 08:50:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 61717 00:08:05.207 08:50:11 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:08:05.207 08:50:11 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 61717 00:08:05.207 08:50:11 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:05.207 08:50:11 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:05.207 08:50:11 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:05.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:05.207 ERROR: process (pid: 61717) is no longer running 00:08:05.207 08:50:11 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:05.207 08:50:11 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 61717 00:08:05.207 08:50:11 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 61717 ']' 00:08:05.207 08:50:11 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.207 08:50:11 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:05.207 08:50:11 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.207 08:50:11 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:05.207 08:50:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:05.207 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (61717) - No such process 00:08:05.207 08:50:11 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:05.207 08:50:11 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:08:05.207 08:50:11 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:08:05.207 08:50:11 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:05.207 08:50:11 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:05.207 08:50:11 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:05.207 08:50:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:08:05.207 08:50:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:05.207 08:50:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:08:05.207 08:50:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:05.207 00:08:05.207 real 0m4.224s 00:08:05.207 user 0m4.155s 00:08:05.207 sys 0m0.800s 00:08:05.207 08:50:11 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:05.207 08:50:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:05.207 ************************************ 00:08:05.207 END TEST default_locks 00:08:05.207 ************************************ 00:08:05.207 08:50:11 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:05.207 08:50:11 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:05.207 08:50:11 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:05.207 08:50:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:05.207 ************************************ 00:08:05.207 START TEST default_locks_via_rpc 00:08:05.207 ************************************ 00:08:05.207 08:50:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:08:05.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
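The default_locks run above comes down to one positive and one negative check: a freshly started spdk_tgt pinned to core 0 must hold its spdk_cpu_lock file lock (visible through lslocks), and once that process has been killed, waiting on the same pid must fail, which is exactly what the ERROR line records. A rough sketch of the positive half, using the same binary, core mask, and lslocks/grep probe as the trace (the sleep is a crude stand-in for the waitforlisten polling the real test performs):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    pid=$!
    sleep 1                                        # stand-in for waitforlisten
    # the running target should hold a file-lock entry matching spdk_cpu_lock
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held by $pid"
    kill "$pid"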
00:08:05.207 08:50:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=61792 00:08:05.207 08:50:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 61792 00:08:05.207 08:50:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61792 ']' 00:08:05.207 08:50:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:05.207 08:50:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.207 08:50:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:05.207 08:50:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.207 08:50:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:05.207 08:50:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.207 [2024-07-25 08:50:12.027077] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:05.207 [2024-07-25 08:50:12.027322] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61792 ] 00:08:05.207 [2024-07-25 08:50:12.207263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.465 [2024-07-25 08:50:12.446703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.724 [2024-07-25 08:50:12.664833] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:06.292 08:50:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:06.292 08:50:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:06.292 08:50:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:06.292 08:50:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.292 08:50:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:06.292 08:50:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.292 08:50:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:08:06.292 08:50:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:06.292 08:50:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:08:06.292 08:50:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:06.292 08:50:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:06.292 08:50:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.292 08:50:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:06.292 08:50:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.292 08:50:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # 
locks_exist 61792 00:08:06.292 08:50:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 61792 00:08:06.292 08:50:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:06.857 08:50:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 61792 00:08:06.857 08:50:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 61792 ']' 00:08:06.857 08:50:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 61792 00:08:06.857 08:50:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:08:06.857 08:50:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:06.857 08:50:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61792 00:08:06.857 killing process with pid 61792 00:08:06.857 08:50:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:06.857 08:50:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:06.857 08:50:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61792' 00:08:06.857 08:50:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 61792 00:08:06.857 08:50:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 61792 00:08:09.386 00:08:09.386 real 0m4.248s 00:08:09.386 user 0m4.280s 00:08:09.386 sys 0m0.764s 00:08:09.386 08:50:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:09.386 ************************************ 00:08:09.386 END TEST default_locks_via_rpc 00:08:09.386 ************************************ 00:08:09.386 08:50:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.386 08:50:16 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:09.386 08:50:16 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:09.386 08:50:16 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:09.386 08:50:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:09.386 ************************************ 00:08:09.386 START TEST non_locking_app_on_locked_coremask 00:08:09.386 ************************************ 00:08:09.386 08:50:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:08:09.386 08:50:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=61868 00:08:09.386 08:50:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 61868 /var/tmp/spdk.sock 00:08:09.386 08:50:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:09.386 08:50:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 61868 ']' 00:08:09.386 08:50:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.386 08:50:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:08:09.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.386 08:50:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.386 08:50:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:09.386 08:50:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:09.386 [2024-07-25 08:50:16.324015] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:09.386 [2024-07-25 08:50:16.324257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61868 ] 00:08:09.386 [2024-07-25 08:50:16.495318] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.645 [2024-07-25 08:50:16.745236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.903 [2024-07-25 08:50:16.949387] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:10.470 08:50:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:10.470 08:50:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:10.470 08:50:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=61884 00:08:10.470 08:50:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:10.470 08:50:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 61884 /var/tmp/spdk2.sock 00:08:10.470 08:50:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 61884 ']' 00:08:10.470 08:50:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:10.470 08:50:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:10.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:10.470 08:50:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:10.470 08:50:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:10.470 08:50:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:10.729 [2024-07-25 08:50:17.658802] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:10.729 [2024-07-25 08:50:17.659044] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61884 ] 00:08:10.729 [2024-07-25 08:50:17.832152] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
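non_locking_app_on_locked_coremask starts a second target on the same core mask but with --disable-cpumask-locks, so it deliberately skips claiming the core lock; the "CPU core locks deactivated." notice above marks that path and is what lets pid 61884 come up alongside pid 61868 on core 0. The two launch commands, reduced to their essentials as they appear in the trace (core mask and socket path copied from this run):

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$spdk_tgt" -m 0x1 &                                                  # first instance, holds the core 0 lock
    "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # second instance, same core, no lock taken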
00:08:10.729 [2024-07-25 08:50:17.832300] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.296 [2024-07-25 08:50:18.332598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.863 [2024-07-25 08:50:18.741231] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:13.238 08:50:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:13.238 08:50:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:13.238 08:50:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 61868 00:08:13.238 08:50:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61868 00:08:13.238 08:50:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:14.174 08:50:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 61868 00:08:14.174 08:50:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 61868 ']' 00:08:14.174 08:50:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 61868 00:08:14.174 08:50:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:14.174 08:50:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:14.174 08:50:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61868 00:08:14.174 08:50:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:14.174 killing process with pid 61868 00:08:14.174 08:50:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:14.174 08:50:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61868' 00:08:14.174 08:50:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 61868 00:08:14.174 08:50:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 61868 00:08:19.444 08:50:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 61884 00:08:19.444 08:50:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 61884 ']' 00:08:19.444 08:50:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 61884 00:08:19.444 08:50:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:19.444 08:50:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:19.444 08:50:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61884 00:08:19.444 killing process with pid 61884 00:08:19.444 08:50:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:19.444 08:50:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:19.444 08:50:25 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61884' 00:08:19.444 08:50:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 61884 00:08:19.444 08:50:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 61884 00:08:20.819 00:08:20.819 real 0m11.588s 00:08:20.819 user 0m11.993s 00:08:20.819 sys 0m1.446s 00:08:20.819 08:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:20.819 08:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:20.819 ************************************ 00:08:20.819 END TEST non_locking_app_on_locked_coremask 00:08:20.819 ************************************ 00:08:20.819 08:50:27 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:20.819 08:50:27 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:20.819 08:50:27 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:20.819 08:50:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:20.819 ************************************ 00:08:20.819 START TEST locking_app_on_unlocked_coremask 00:08:20.819 ************************************ 00:08:20.819 08:50:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:08:20.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.819 08:50:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=62039 00:08:20.819 08:50:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 62039 /var/tmp/spdk.sock 00:08:20.819 08:50:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:20.819 08:50:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 62039 ']' 00:08:20.819 08:50:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.819 08:50:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:20.819 08:50:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.819 08:50:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:20.819 08:50:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:21.080 [2024-07-25 08:50:27.973002] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:21.080 [2024-07-25 08:50:27.973288] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62039 ] 00:08:21.080 [2024-07-25 08:50:28.156580] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:21.080 [2024-07-25 08:50:28.156665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.338 [2024-07-25 08:50:28.402205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.596 [2024-07-25 08:50:28.608748] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:22.531 08:50:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:22.531 08:50:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:22.531 08:50:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=62056 00:08:22.531 08:50:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:22.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:22.532 08:50:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 62056 /var/tmp/spdk2.sock 00:08:22.532 08:50:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 62056 ']' 00:08:22.532 08:50:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:22.532 08:50:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:22.532 08:50:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:22.532 08:50:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:22.532 08:50:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:22.532 [2024-07-25 08:50:29.406035] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:22.532 [2024-07-25 08:50:29.406364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62056 ] 00:08:22.532 [2024-07-25 08:50:29.581396] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.099 [2024-07-25 08:50:30.163240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.665 [2024-07-25 08:50:30.624909] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:25.052 08:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:25.052 08:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:25.052 08:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 62056 00:08:25.052 08:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 62056 00:08:25.052 08:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:25.988 08:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 62039 00:08:25.988 08:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 62039 ']' 00:08:25.988 08:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 62039 00:08:25.988 08:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:25.988 08:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:25.988 08:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62039 00:08:25.988 08:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:25.988 08:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:25.988 08:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62039' 00:08:25.988 killing process with pid 62039 00:08:25.988 08:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 62039 00:08:25.988 08:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 62039 00:08:31.255 08:50:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 62056 00:08:31.255 08:50:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 62056 ']' 00:08:31.255 08:50:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 62056 00:08:31.255 08:50:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:31.255 08:50:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:31.255 08:50:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62056 00:08:31.255 killing process with pid 62056 00:08:31.255 08:50:37 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:31.255 08:50:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:31.255 08:50:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62056' 00:08:31.255 08:50:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 62056 00:08:31.255 08:50:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 62056 00:08:33.156 00:08:33.156 real 0m12.150s 00:08:33.156 user 0m12.449s 00:08:33.156 sys 0m1.518s 00:08:33.156 08:50:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:33.156 ************************************ 00:08:33.156 08:50:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:33.156 END TEST locking_app_on_unlocked_coremask 00:08:33.156 ************************************ 00:08:33.156 08:50:40 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:33.156 08:50:40 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:33.156 08:50:40 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:33.156 08:50:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:33.156 ************************************ 00:08:33.156 START TEST locking_app_on_locked_coremask 00:08:33.156 ************************************ 00:08:33.156 08:50:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:08:33.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.156 08:50:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=62215 00:08:33.156 08:50:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:33.156 08:50:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 62215 /var/tmp/spdk.sock 00:08:33.156 08:50:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 62215 ']' 00:08:33.156 08:50:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.156 08:50:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:33.156 08:50:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.156 08:50:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:33.156 08:50:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:33.156 [2024-07-25 08:50:40.127641] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
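The locking_app_on_unlocked_coremask run above starts its first target with --disable-cpumask-locks, so a second spdk_tgt on the same mask (0x1) can boot and claim core 0 itself; the locks_exist helper then confirms the claim with the lslocks call visible in the trace. As a rough sketch, the same check could be made by hand like this — the PID is a placeholder, and the /var/tmp/spdk_cpu_lock_* prefix is taken from the check_remaining_locks output later in this log:
pid=62056                               # hypothetical: any running spdk_tgt PID
lslocks -p "$pid" | grep spdk_cpu_lock  # lists the per-core advisory locks the process holds
ls /var/tmp/spdk_cpu_lock_*             # lock files, one per claimed core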
00:08:33.156 [2024-07-25 08:50:40.127805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62215 ] 00:08:33.416 [2024-07-25 08:50:40.292513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.674 [2024-07-25 08:50:40.543071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.674 [2024-07-25 08:50:40.745846] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:34.241 08:50:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:34.241 08:50:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:34.241 08:50:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=62231 00:08:34.241 08:50:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 62231 /var/tmp/spdk2.sock 00:08:34.241 08:50:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:34.241 08:50:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:34.241 08:50:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 62231 /var/tmp/spdk2.sock 00:08:34.241 08:50:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:34.241 08:50:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.241 08:50:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:34.241 08:50:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.241 08:50:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 62231 /var/tmp/spdk2.sock 00:08:34.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:34.241 08:50:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 62231 ']' 00:08:34.241 08:50:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:34.241 08:50:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:34.241 08:50:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:34.241 08:50:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:34.241 08:50:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:34.499 [2024-07-25 08:50:41.480109] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:34.499 [2024-07-25 08:50:41.480292] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62231 ] 00:08:34.758 [2024-07-25 08:50:41.657313] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 62215 has claimed it. 00:08:34.758 [2024-07-25 08:50:41.657419] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:35.017 ERROR: process (pid: 62231) is no longer running 00:08:35.017 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (62231) - No such process 00:08:35.017 08:50:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:35.017 08:50:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:08:35.017 08:50:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:35.017 08:50:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:35.017 08:50:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:35.017 08:50:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:35.017 08:50:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 62215 00:08:35.017 08:50:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 62215 00:08:35.017 08:50:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:35.584 08:50:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 62215 00:08:35.584 08:50:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 62215 ']' 00:08:35.584 08:50:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 62215 00:08:35.584 08:50:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:35.584 08:50:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:35.584 08:50:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62215 00:08:35.584 killing process with pid 62215 00:08:35.584 08:50:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:35.584 08:50:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:35.584 08:50:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62215' 00:08:35.584 08:50:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 62215 00:08:35.584 08:50:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 62215 00:08:38.112 00:08:38.112 real 0m4.780s 00:08:38.112 user 0m5.124s 00:08:38.112 sys 0m0.838s 00:08:38.112 08:50:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.112 ************************************ 00:08:38.112 END 
TEST locking_app_on_locked_coremask 00:08:38.112 ************************************ 00:08:38.112 08:50:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:38.112 08:50:44 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:38.112 08:50:44 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:38.112 08:50:44 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.112 08:50:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:38.112 ************************************ 00:08:38.112 START TEST locking_overlapped_coremask 00:08:38.112 ************************************ 00:08:38.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.112 08:50:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:08:38.112 08:50:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=62295 00:08:38.112 08:50:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 62295 /var/tmp/spdk.sock 00:08:38.112 08:50:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 62295 ']' 00:08:38.112 08:50:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.112 08:50:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:38.112 08:50:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.112 08:50:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:08:38.112 08:50:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:38.112 08:50:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:38.112 [2024-07-25 08:50:44.984367] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
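In the locking_app_on_locked_coremask run that finishes just above, neither target passes --disable-cpumask-locks, so the first instance (pid 62215) claims core 0 and the second exits with "Cannot create lock on core 0, probably process 62215 has claimed it". A rough, hand-run sketch of that conflict, reusing the binary and flags shown in the trace — the sleep is only a crude stand-in for the harness's waitforlisten helper:
cd /home/vagrant/spdk_repo/spdk
build/bin/spdk_tgt -m 0x1 &                        # first target claims core 0
sleep 2                                            # stand-in for waitforlisten
build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock   # exits: cannot create lock on core 0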
00:08:38.112 [2024-07-25 08:50:44.985071] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62295 ] 00:08:38.112 [2024-07-25 08:50:45.159978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:38.370 [2024-07-25 08:50:45.409491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.370 [2024-07-25 08:50:45.409626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.370 [2024-07-25 08:50:45.409632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:38.628 [2024-07-25 08:50:45.606934] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:39.195 08:50:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:39.195 08:50:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:39.195 08:50:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=62319 00:08:39.195 08:50:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 62319 /var/tmp/spdk2.sock 00:08:39.195 08:50:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:39.195 08:50:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 62319 /var/tmp/spdk2.sock 00:08:39.195 08:50:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:39.195 08:50:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:39.195 08:50:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:39.195 08:50:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:39.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:39.195 08:50:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:39.195 08:50:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 62319 /var/tmp/spdk2.sock 00:08:39.195 08:50:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 62319 ']' 00:08:39.195 08:50:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:39.195 08:50:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:39.195 08:50:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:39.195 08:50:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:39.195 08:50:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:39.453 [2024-07-25 08:50:46.319933] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
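The overlapped_coremask case uses two masks that intersect on exactly one core: 0x7 covers cores 0-2 for the first target and 0x1c covers cores 2-4 for the second, so core 2 is the only contested core and is the one named in the error that follows. A quick sanity check on the masks taken from the trace:
printf '0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2 is the only core both masks request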
00:08:39.453 [2024-07-25 08:50:46.320112] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62319 ] 00:08:39.453 [2024-07-25 08:50:46.500562] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 62295 has claimed it. 00:08:39.453 [2024-07-25 08:50:46.500677] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:40.018 ERROR: process (pid: 62319) is no longer running 00:08:40.018 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (62319) - No such process 00:08:40.018 08:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:40.018 08:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:08:40.018 08:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:40.018 08:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:40.018 08:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:40.018 08:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:40.018 08:50:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:40.018 08:50:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:40.018 08:50:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:40.018 08:50:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:40.018 08:50:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 62295 00:08:40.018 08:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 62295 ']' 00:08:40.019 08:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 62295 00:08:40.019 08:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:08:40.019 08:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:40.019 08:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62295 00:08:40.019 08:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:40.019 08:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:40.019 killing process with pid 62295 00:08:40.019 08:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62295' 00:08:40.019 08:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 62295 00:08:40.019 08:50:47 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 62295 00:08:42.548 00:08:42.548 real 0m4.409s 00:08:42.548 user 0m11.405s 00:08:42.548 sys 0m0.687s 00:08:42.548 08:50:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:42.548 08:50:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:42.548 ************************************ 00:08:42.548 END TEST locking_overlapped_coremask 00:08:42.548 ************************************ 00:08:42.548 08:50:49 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:42.548 08:50:49 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:42.548 08:50:49 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:42.548 08:50:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:42.548 ************************************ 00:08:42.548 START TEST locking_overlapped_coremask_via_rpc 00:08:42.548 ************************************ 00:08:42.548 08:50:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:08:42.548 08:50:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=62383 00:08:42.548 08:50:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 62383 /var/tmp/spdk.sock 00:08:42.548 08:50:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 62383 ']' 00:08:42.548 08:50:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:42.548 08:50:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.548 08:50:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:42.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.548 08:50:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.548 08:50:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:42.548 08:50:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:42.548 [2024-07-25 08:50:49.456145] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:42.548 [2024-07-25 08:50:49.456346] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62383 ] 00:08:42.548 [2024-07-25 08:50:49.627914] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:42.548 [2024-07-25 08:50:49.628037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:42.806 [2024-07-25 08:50:49.864782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:42.806 [2024-07-25 08:50:49.864950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.806 [2024-07-25 08:50:49.864966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:43.063 [2024-07-25 08:50:50.074636] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:43.629 08:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:43.629 08:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:43.629 08:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=62406 00:08:43.629 08:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:43.629 08:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 62406 /var/tmp/spdk2.sock 00:08:43.629 08:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 62406 ']' 00:08:43.629 08:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:43.629 08:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:43.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:43.629 08:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:43.629 08:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:43.629 08:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.887 [2024-07-25 08:50:50.832017] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:43.887 [2024-07-25 08:50:50.832668] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62406 ] 00:08:44.146 [2024-07-25 08:50:51.015517] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:44.146 [2024-07-25 08:50:51.015610] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:44.404 [2024-07-25 08:50:51.517273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:44.404 [2024-07-25 08:50:51.517399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:44.404 [2024-07-25 08:50:51.517419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:44.970 [2024-07-25 08:50:52.008347] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:46.867 08:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:46.867 08:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:46.867 08:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:46.867 08:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.867 08:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:46.867 08:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.867 08:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:46.867 08:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:08:46.867 08:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:46.867 08:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:46.867 08:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:46.867 08:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:46.867 08:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:46.867 08:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:46.867 08:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.867 08:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:46.867 [2024-07-25 08:50:53.641107] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 62383 has claimed it. 
00:08:46.867 request: 00:08:46.867 { 00:08:46.867 "method": "framework_enable_cpumask_locks", 00:08:46.867 "req_id": 1 00:08:46.867 } 00:08:46.867 Got JSON-RPC error response 00:08:46.867 response: 00:08:46.867 { 00:08:46.867 "code": -32603, 00:08:46.867 "message": "Failed to claim CPU core: 2" 00:08:46.867 } 00:08:46.867 08:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:46.867 08:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:08:46.867 08:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:46.867 08:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:46.867 08:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:46.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.867 08:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 62383 /var/tmp/spdk.sock 00:08:46.867 08:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 62383 ']' 00:08:46.867 08:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.867 08:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:46.867 08:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.868 08:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:46.868 08:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:46.868 08:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:46.868 08:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:46.868 08:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 62406 /var/tmp/spdk2.sock 00:08:46.868 08:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 62406 ']' 00:08:46.868 08:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:46.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:46.868 08:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:46.868 08:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
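The via_rpc variant differs from the earlier cases in that both targets boot with --disable-cpumask-locks and the locks are only claimed afterwards through the framework_enable_cpumask_locks RPC: the first call succeeds, while the call against the second target's socket returns the -32603 "Failed to claim CPU core: 2" response shown above, since the first target already owns core 2. Outside the harness the same two calls could presumably be made directly with scripts/rpc.py (which the rpc_cmd helper seen here appears to wrap), reusing the socket paths from the trace:
scripts/rpc.py framework_enable_cpumask_locks                          # first target (default /var/tmp/spdk.sock): locks claimed
scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # second target: -32603 "Failed to claim CPU core: 2"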
00:08:46.868 08:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:46.868 08:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.125 08:50:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:47.125 08:50:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:47.125 08:50:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:47.125 08:50:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:47.125 08:50:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:47.125 ************************************ 00:08:47.125 END TEST locking_overlapped_coremask_via_rpc 00:08:47.125 ************************************ 00:08:47.126 08:50:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:47.126 00:08:47.126 real 0m4.912s 00:08:47.126 user 0m1.716s 00:08:47.126 sys 0m0.240s 00:08:47.126 08:50:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:47.126 08:50:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.383 08:50:54 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:47.383 08:50:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 62383 ]] 00:08:47.383 08:50:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 62383 00:08:47.383 08:50:54 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 62383 ']' 00:08:47.383 08:50:54 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 62383 00:08:47.383 08:50:54 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:08:47.383 08:50:54 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:47.383 08:50:54 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62383 00:08:47.383 killing process with pid 62383 00:08:47.383 08:50:54 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:47.383 08:50:54 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:47.383 08:50:54 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62383' 00:08:47.383 08:50:54 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 62383 00:08:47.383 08:50:54 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 62383 00:08:49.910 08:50:56 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 62406 ]] 00:08:49.910 08:50:56 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 62406 00:08:49.910 08:50:56 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 62406 ']' 00:08:49.910 08:50:56 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 62406 00:08:49.910 08:50:56 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:08:49.910 08:50:56 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:49.910 
08:50:56 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62406 00:08:49.910 killing process with pid 62406 00:08:49.910 08:50:56 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:08:49.910 08:50:56 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:08:49.910 08:50:56 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62406' 00:08:49.910 08:50:56 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 62406 00:08:49.910 08:50:56 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 62406 00:08:51.811 08:50:58 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:51.811 08:50:58 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:51.811 08:50:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 62383 ]] 00:08:51.811 08:50:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 62383 00:08:51.811 08:50:58 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 62383 ']' 00:08:51.811 08:50:58 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 62383 00:08:51.811 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (62383) - No such process 00:08:51.811 Process with pid 62383 is not found 00:08:51.811 08:50:58 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 62383 is not found' 00:08:51.811 08:50:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 62406 ]] 00:08:51.811 08:50:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 62406 00:08:51.811 08:50:58 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 62406 ']' 00:08:51.811 08:50:58 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 62406 00:08:51.811 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (62406) - No such process 00:08:51.811 Process with pid 62406 is not found 00:08:51.812 08:50:58 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 62406 is not found' 00:08:51.812 08:50:58 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:51.812 00:08:51.812 real 0m51.377s 00:08:51.812 user 1m26.353s 00:08:51.812 sys 0m7.489s 00:08:51.812 08:50:58 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:51.812 ************************************ 00:08:51.812 08:50:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:51.812 END TEST cpu_locks 00:08:51.812 ************************************ 00:08:51.812 00:08:51.812 real 1m22.052s 00:08:51.812 user 2m24.870s 00:08:51.812 sys 0m11.804s 00:08:51.812 08:50:58 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:51.812 08:50:58 event -- common/autotest_common.sh@10 -- # set +x 00:08:51.812 ************************************ 00:08:51.812 END TEST event 00:08:51.812 ************************************ 00:08:52.070 08:50:58 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:52.070 08:50:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:52.070 08:50:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:52.070 08:50:58 -- common/autotest_common.sh@10 -- # set +x 00:08:52.070 ************************************ 00:08:52.070 START TEST thread 00:08:52.070 ************************************ 00:08:52.070 08:50:58 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:52.070 * Looking for test storage... 
00:08:52.070 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:08:52.070 08:50:59 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:52.070 08:50:59 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:52.070 08:50:59 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:52.070 08:50:59 thread -- common/autotest_common.sh@10 -- # set +x 00:08:52.070 ************************************ 00:08:52.070 START TEST thread_poller_perf 00:08:52.070 ************************************ 00:08:52.070 08:50:59 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:52.070 [2024-07-25 08:50:59.103561] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:52.070 [2024-07-25 08:50:59.103808] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62593 ] 00:08:52.328 [2024-07-25 08:50:59.280439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.586 [2024-07-25 08:50:59.520583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.587 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:53.961 ====================================== 00:08:53.961 busy:2209787338 (cyc) 00:08:53.961 total_run_count: 303000 00:08:53.961 tsc_hz: 2200000000 (cyc) 00:08:53.961 ====================================== 00:08:53.961 poller_cost: 7293 (cyc), 3315 (nsec) 00:08:53.961 00:08:53.961 real 0m1.881s 00:08:53.961 user 0m1.640s 00:08:53.961 sys 0m0.131s 00:08:53.961 08:51:00 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:53.961 08:51:00 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:53.961 ************************************ 00:08:53.961 END TEST thread_poller_perf 00:08:53.961 ************************************ 00:08:53.961 08:51:00 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:53.961 08:51:00 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:53.961 08:51:00 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:53.961 08:51:00 thread -- common/autotest_common.sh@10 -- # set +x 00:08:53.961 ************************************ 00:08:53.961 START TEST thread_poller_perf 00:08:53.961 ************************************ 00:08:53.961 08:51:00 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:53.961 [2024-07-25 08:51:01.031235] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
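The poller_cost figures in the summary above follow directly from the counters printed with them: cost in cycles is busy cycles divided by total_run_count, and the nanosecond figure is that divided by the TSC rate in GHz. Recomputing the first run's numbers makes the relationship explicit:
awk 'BEGIN { busy=2209787338; runs=303000; hz=2200000000;
             cyc = busy / runs;
             printf "%.0f cyc, %.0f nsec\n", cyc, cyc / (hz / 1e9) }'
# prints: 7293 cyc, 3315 nsec -- matching the first run's summary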
00:08:53.961 [2024-07-25 08:51:01.031492] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62630 ] 00:08:54.219 [2024-07-25 08:51:01.205162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.477 [2024-07-25 08:51:01.468455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.477 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:08:55.851 ====================================== 00:08:55.851 busy:2204156959 (cyc) 00:08:55.851 total_run_count: 3824000 00:08:55.851 tsc_hz: 2200000000 (cyc) 00:08:55.851 ====================================== 00:08:55.851 poller_cost: 576 (cyc), 261 (nsec) 00:08:55.851 00:08:55.851 real 0m1.896s 00:08:55.851 user 0m1.656s 00:08:55.851 sys 0m0.130s 00:08:55.851 08:51:02 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:55.851 08:51:02 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:55.851 ************************************ 00:08:55.851 END TEST thread_poller_perf 00:08:55.851 ************************************ 00:08:55.851 08:51:02 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:55.851 00:08:55.851 real 0m3.954s 00:08:55.851 user 0m3.350s 00:08:55.851 sys 0m0.379s 00:08:55.851 08:51:02 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:55.851 08:51:02 thread -- common/autotest_common.sh@10 -- # set +x 00:08:55.851 ************************************ 00:08:55.851 END TEST thread 00:08:55.851 ************************************ 00:08:55.851 08:51:02 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:08:55.851 08:51:02 -- spdk/autotest.sh@189 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:55.851 08:51:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:55.851 08:51:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:55.851 08:51:02 -- common/autotest_common.sh@10 -- # set +x 00:08:55.851 ************************************ 00:08:55.851 START TEST app_cmdline 00:08:55.851 ************************************ 00:08:55.851 08:51:02 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:56.109 * Looking for test storage... 00:08:56.109 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:56.109 08:51:03 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:56.109 08:51:03 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=62711 00:08:56.109 08:51:03 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:56.109 08:51:03 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 62711 00:08:56.109 08:51:03 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 62711 ']' 00:08:56.109 08:51:03 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.109 08:51:03 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:56.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.109 08:51:03 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:56.109 08:51:03 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:56.109 08:51:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:56.109 [2024-07-25 08:51:03.150078] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:56.110 [2024-07-25 08:51:03.150245] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62711 ] 00:08:56.368 [2024-07-25 08:51:03.313796] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.627 [2024-07-25 08:51:03.559218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.885 [2024-07-25 08:51:03.761681] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:57.451 08:51:04 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:57.451 08:51:04 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:08:57.451 08:51:04 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:57.710 { 00:08:57.710 "version": "SPDK v24.09-pre git sha1 704257090", 00:08:57.710 "fields": { 00:08:57.710 "major": 24, 00:08:57.710 "minor": 9, 00:08:57.710 "patch": 0, 00:08:57.710 "suffix": "-pre", 00:08:57.710 "commit": "704257090" 00:08:57.710 } 00:08:57.710 } 00:08:57.710 08:51:04 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:57.710 08:51:04 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:57.710 08:51:04 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:57.710 08:51:04 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:57.710 08:51:04 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:57.710 08:51:04 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.710 08:51:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:57.710 08:51:04 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:57.710 08:51:04 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:57.710 08:51:04 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.710 08:51:04 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:57.710 08:51:04 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:57.710 08:51:04 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:57.710 08:51:04 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:08:57.710 08:51:04 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:57.710 08:51:04 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:57.710 08:51:04 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:57.710 08:51:04 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:57.710 08:51:04 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:57.710 08:51:04 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
00:08:57.710 08:51:04 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:57.710 08:51:04 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:57.710 08:51:04 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:57.710 08:51:04 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:57.969 request: 00:08:57.969 { 00:08:57.969 "method": "env_dpdk_get_mem_stats", 00:08:57.969 "req_id": 1 00:08:57.969 } 00:08:57.969 Got JSON-RPC error response 00:08:57.969 response: 00:08:57.969 { 00:08:57.969 "code": -32601, 00:08:57.969 "message": "Method not found" 00:08:57.969 } 00:08:57.969 08:51:04 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:08:57.969 08:51:04 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:57.969 08:51:04 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:57.969 08:51:04 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:57.969 08:51:04 app_cmdline -- app/cmdline.sh@1 -- # killprocess 62711 00:08:57.969 08:51:04 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 62711 ']' 00:08:57.969 08:51:04 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 62711 00:08:57.969 08:51:04 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:08:57.969 08:51:04 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:57.969 08:51:04 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62711 00:08:57.969 08:51:04 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:57.969 08:51:04 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:57.969 killing process with pid 62711 00:08:57.969 08:51:04 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62711' 00:08:57.969 08:51:04 app_cmdline -- common/autotest_common.sh@969 -- # kill 62711 00:08:57.969 08:51:04 app_cmdline -- common/autotest_common.sh@974 -- # wait 62711 00:09:00.502 00:09:00.502 real 0m4.235s 00:09:00.502 user 0m4.596s 00:09:00.502 sys 0m0.637s 00:09:00.502 08:51:07 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:00.502 08:51:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:00.502 ************************************ 00:09:00.502 END TEST app_cmdline 00:09:00.502 ************************************ 00:09:00.502 08:51:07 -- spdk/autotest.sh@190 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:00.502 08:51:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:00.502 08:51:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:00.502 08:51:07 -- common/autotest_common.sh@10 -- # set +x 00:09:00.502 ************************************ 00:09:00.502 START TEST version 00:09:00.502 ************************************ 00:09:00.502 08:51:07 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:00.502 * Looking for test storage... 
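The app_cmdline run that ends above exercises the --rpcs-allowed allowlist: the target was started with only spdk_get_version and rpc_get_methods permitted, so spdk_get_version returns the version JSON while env_dpdk_get_mem_stats is rejected with the -32601 "Method not found" response. Reproducing that behaviour outside the test would look roughly like the following, reusing the flags and RPC names from the trace (paths shortened, sleep standing in for waitforlisten):
build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
sleep 2                                  # stand-in for waitforlisten
scripts/rpc.py spdk_get_version          # on the allowlist: returns the version object shown above
scripts/rpc.py env_dpdk_get_mem_stats    # not on the allowlist: error -32601 "Method not found"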
00:09:00.502 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:00.502 08:51:07 version -- app/version.sh@17 -- # get_header_version major 00:09:00.502 08:51:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:00.502 08:51:07 version -- app/version.sh@14 -- # cut -f2 00:09:00.502 08:51:07 version -- app/version.sh@14 -- # tr -d '"' 00:09:00.502 08:51:07 version -- app/version.sh@17 -- # major=24 00:09:00.502 08:51:07 version -- app/version.sh@18 -- # get_header_version minor 00:09:00.502 08:51:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:00.502 08:51:07 version -- app/version.sh@14 -- # tr -d '"' 00:09:00.502 08:51:07 version -- app/version.sh@14 -- # cut -f2 00:09:00.502 08:51:07 version -- app/version.sh@18 -- # minor=9 00:09:00.502 08:51:07 version -- app/version.sh@19 -- # get_header_version patch 00:09:00.502 08:51:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:00.502 08:51:07 version -- app/version.sh@14 -- # tr -d '"' 00:09:00.502 08:51:07 version -- app/version.sh@14 -- # cut -f2 00:09:00.502 08:51:07 version -- app/version.sh@19 -- # patch=0 00:09:00.502 08:51:07 version -- app/version.sh@20 -- # get_header_version suffix 00:09:00.503 08:51:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:00.503 08:51:07 version -- app/version.sh@14 -- # cut -f2 00:09:00.503 08:51:07 version -- app/version.sh@14 -- # tr -d '"' 00:09:00.503 08:51:07 version -- app/version.sh@20 -- # suffix=-pre 00:09:00.503 08:51:07 version -- app/version.sh@22 -- # version=24.9 00:09:00.503 08:51:07 version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:00.503 08:51:07 version -- app/version.sh@28 -- # version=24.9rc0 00:09:00.503 08:51:07 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:00.503 08:51:07 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:00.503 08:51:07 version -- app/version.sh@30 -- # py_version=24.9rc0 00:09:00.503 08:51:07 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:09:00.503 00:09:00.503 real 0m0.147s 00:09:00.503 user 0m0.073s 00:09:00.503 sys 0m0.104s 00:09:00.503 08:51:07 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:00.503 08:51:07 version -- common/autotest_common.sh@10 -- # set +x 00:09:00.503 ************************************ 00:09:00.503 END TEST version 00:09:00.503 ************************************ 00:09:00.503 08:51:07 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:09:00.503 08:51:07 -- spdk/autotest.sh@202 -- # uname -s 00:09:00.503 08:51:07 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:09:00.503 08:51:07 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:09:00.503 08:51:07 -- spdk/autotest.sh@203 -- # [[ 1 -eq 1 ]] 00:09:00.503 08:51:07 -- spdk/autotest.sh@209 -- # [[ 0 -eq 0 ]] 00:09:00.503 08:51:07 -- spdk/autotest.sh@210 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:09:00.503 08:51:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:00.503 08:51:07 -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:09:00.503 08:51:07 -- common/autotest_common.sh@10 -- # set +x 00:09:00.503 ************************************ 00:09:00.503 START TEST spdk_dd 00:09:00.503 ************************************ 00:09:00.503 08:51:07 spdk_dd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:09:00.503 * Looking for test storage... 00:09:00.503 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:00.503 08:51:07 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:00.503 08:51:07 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:00.503 08:51:07 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:00.503 08:51:07 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:00.503 08:51:07 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.503 08:51:07 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.503 08:51:07 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.503 08:51:07 spdk_dd -- paths/export.sh@5 -- # export PATH 00:09:00.503 08:51:07 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.503 08:51:07 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:00.761 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:00.761 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:00.761 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:00.761 08:51:07 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:09:01.022 08:51:07 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:09:01.022 08:51:07 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:09:01.022 08:51:07 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:09:01.022 08:51:07 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:09:01.022 08:51:07 spdk_dd -- 
scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:01.022 08:51:07 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:09:01.022 08:51:07 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:09:01.022 08:51:07 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:09:01.022 08:51:07 spdk_dd -- scripts/common.sh@230 -- # local class 00:09:01.022 08:51:07 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:09:01.022 08:51:07 spdk_dd -- scripts/common.sh@232 -- # local progif 00:09:01.022 08:51:07 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:09:01.022 08:51:07 spdk_dd -- scripts/common.sh@233 -- # class=01 00:09:01.022 08:51:07 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:09:01.022 08:51:07 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:09:01.022 08:51:07 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:09:01.022 08:51:07 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:09:01.022 08:51:07 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:09:01.022 08:51:07 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:09:01.022 08:51:07 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:01.022 08:51:07 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:09:01.022 08:51:07 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:09:01.022 08:51:07 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:09:01.022 08:51:07 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:01.022 08:51:07 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:09:01.022 08:51:07 spdk_dd -- scripts/common.sh@15 -- # local i 00:09:01.022 08:51:07 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:09:01.022 08:51:07 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:09:01.022 08:51:07 spdk_dd -- scripts/common.sh@24 -- # return 0 00:09:01.022 08:51:07 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:09:01.022 08:51:07 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:01.022 08:51:07 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:09:01.022 08:51:07 spdk_dd -- scripts/common.sh@15 -- # local i 00:09:01.022 08:51:07 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:09:01.022 08:51:07 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:09:01.022 08:51:07 spdk_dd -- scripts/common.sh@24 -- # return 0 00:09:01.022 08:51:07 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:09:01.022 08:51:07 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:09:01.022 08:51:07 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:01.022 08:51:07 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:09:01.022 08:51:07 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:09:01.022 08:51:07 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:09:01.022 08:51:07 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:09:01.022 08:51:07 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:01.022 08:51:07 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:09:01.022 08:51:07 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:09:01.022 08:51:07 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:09:01.022 08:51:07 spdk_dd -- scripts/common.sh@325 -- # (( 2 )) 00:09:01.022 08:51:07 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 
0000:00:10.0 0000:00:11.0 00:09:01.022 08:51:07 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@139 -- # local lib 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libasan.so.8 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.1 == liburing.so.* ]] 00:09:01.022 
08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r 
_ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfu_device.so.3.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scsi.so.9.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfu_tgt.so.3.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.16.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == 
liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:09:01.022 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.0 == liburing.so.* ]] 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == 
liburing.so.* ]] 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:09:01.023 * spdk_dd linked to liburing 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:09:01.023 08:51:07 spdk_dd -- 
common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@68 -- # 
CONFIG_FC_PATH= 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@70 -- # CONFIG_FC=n 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:09:01.023 08:51:07 spdk_dd -- common/build_config.sh@83 -- # CONFIG_URING=y 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:09:01.023 08:51:07 spdk_dd -- dd/common.sh@153 -- # return 0 00:09:01.023 08:51:07 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:09:01.023 08:51:07 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:09:01.023 08:51:07 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:01.023 08:51:07 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:01.023 08:51:07 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:01.023 ************************************ 00:09:01.023 START TEST spdk_dd_basic_rw 00:09:01.023 ************************************ 00:09:01.023 08:51:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:09:01.023 * Looking for test storage... 
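Before any dd test runs, the prologue above settles one question: is the spdk_dd binary actually linked against liburing? check_liburing walks the NEEDED entries that objdump -p reports, matches liburing.so.2 against liburing.so.*, and sets liburing_in_use=1; build_config.sh then confirms CONFIG_URING=y, and the guard at dd.sh@15 would only abort if uring testing had been requested for a binary without liburing. A reduced sketch of that check and guard, assuming binutils' objdump and the build path from the trace (the failing branch is illustrative; this run never takes it):

    #!/usr/bin/env bash
    # Hedged sketch of check_liburing and the uring guard traced above.
    bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    liburing_in_use=0

    # objdump -p lists the dynamic section; NEEDED lines name the shared
    # libraries the binary requires, e.g. "  NEEDED    liburing.so.2".
    while read -r _ lib _; do
        if [[ $lib == liburing.so.* ]]; then
            liburing_in_use=1
            printf '* spdk_dd linked to liburing\n'
            break
        fi
    done < <(objdump -p "$bin" | grep NEEDED)

    # SPDK_TEST_URING is taken from the CI job environment (assumption here).
    if (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )); then
        echo "uring tests requested but spdk_dd is not linked to liburing" >&2
        exit 1
    fi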
00:09:01.023 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:01.023 08:51:08 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:01.023 08:51:08 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.023 08:51:08 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.023 08:51:08 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.023 08:51:08 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.023 08:51:08 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.023 08:51:08 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.023 08:51:08 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:09:01.023 08:51:08 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.023 08:51:08 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:09:01.023 08:51:08 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:09:01.023 08:51:08 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:09:01.023 08:51:08 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:09:01.023 08:51:08 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:09:01.023 08:51:08 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # 
method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:09:01.023 08:51:08 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:09:01.023 08:51:08 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:01.023 08:51:08 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:01.023 08:51:08 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:09:01.023 08:51:08 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:09:01.023 08:51:08 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:09:01.023 08:51:08 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:09:01.292 08:51:08 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace 
Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not 
Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:09:01.292 08:51:08 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:09:01.293 08:51:08 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial 
Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported 
Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private 
Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:09:01.293 08:51:08 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:09:01.293 08:51:08 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:09:01.293 08:51:08 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:09:01.293 08:51:08 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:09:01.293 08:51:08 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:09:01.293 08:51:08 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:09:01.293 08:51:08 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:01.293 08:51:08 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:09:01.293 08:51:08 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:09:01.293 08:51:08 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:01.293 08:51:08 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:09:01.293 ************************************ 00:09:01.293 START TEST dd_bs_lt_native_bs 00:09:01.293 ************************************ 00:09:01.293 08:51:08 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1125 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:09:01.293 08:51:08 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # local es=0 00:09:01.293 08:51:08 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # valid_exec_arg 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:09:01.293 08:51:08 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.293 08:51:08 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:01.293 08:51:08 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.293 08:51:08 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:01.293 08:51:08 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.293 08:51:08 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:01.293 08:51:08 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.293 08:51:08 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:01.293 08:51:08 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:09:01.551 { 00:09:01.551 "subsystems": [ 00:09:01.551 { 00:09:01.551 "subsystem": "bdev", 00:09:01.551 "config": [ 00:09:01.551 { 00:09:01.551 "params": { 00:09:01.551 "trtype": "pcie", 00:09:01.551 "traddr": "0000:00:10.0", 00:09:01.551 "name": "Nvme0" 00:09:01.551 }, 00:09:01.551 "method": "bdev_nvme_attach_controller" 00:09:01.551 }, 00:09:01.551 { 00:09:01.551 "method": "bdev_wait_for_examine" 00:09:01.551 } 00:09:01.551 ] 00:09:01.551 } 00:09:01.551 ] 00:09:01.551 } 00:09:01.551 [2024-07-25 08:51:08.505196] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
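The long identify dump above is there so basic_rw.sh can learn the namespace's native block size: one regex captures the in-use LBA format index from "Current LBA Format: LBA Format #04", a second captures that format's data size (4096), and the result becomes native_bs; the dd_bs_lt_native_bs case then deliberately passes --bs=2048, which spdk_dd rejects below. A compact sketch of the extraction, assuming a controller at the PCI address used in the trace (regexes held in variables rather than the inline form the script uses):

    #!/usr/bin/env bash
    # Hedged sketch of get_native_nvme_bs as traced above.
    pci=0000:00:10.0
    id=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:pcie traddr:$pci")

    # Step 1: which LBA format is currently selected? ("#04" in the trace)
    re='Current LBA Format: *LBA Format #([0-9]+)'
    [[ $id =~ $re ]] || exit 1
    lbaf=${BASH_REMATCH[1]}

    # Step 2: that format's data size is the native block size (4096 here).
    re="LBA Format #${lbaf}: Data Size: *([0-9]+)"
    [[ $id =~ $re ]] || exit 1
    native_bs=${BASH_REMATCH[1]}
    echo "$native_bs"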
00:09:01.551 [2024-07-25 08:51:08.505366] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63060 ] 00:09:01.809 [2024-07-25 08:51:08.683616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.068 [2024-07-25 08:51:08.962362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.068 [2024-07-25 08:51:09.166056] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:02.326 [2024-07-25 08:51:09.353054] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:09:02.326 [2024-07-25 08:51:09.353164] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:02.891 [2024-07-25 08:51:09.885213] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:03.457 08:51:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # es=234 00:09:03.457 08:51:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:03.457 08:51:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@662 -- # es=106 00:09:03.457 08:51:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # case "$es" in 00:09:03.457 08:51:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@670 -- # es=1 00:09:03.457 08:51:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:03.457 00:09:03.457 real 0m1.932s 00:09:03.457 user 0m1.585s 00:09:03.457 sys 0m0.294s 00:09:03.457 08:51:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:03.457 ************************************ 00:09:03.457 08:51:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:09:03.457 END TEST dd_bs_lt_native_bs 00:09:03.457 ************************************ 00:09:03.457 08:51:10 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:09:03.457 08:51:10 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:03.457 08:51:10 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:03.457 08:51:10 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:09:03.457 ************************************ 00:09:03.457 START TEST dd_rw 00:09:03.457 ************************************ 00:09:03.457 08:51:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1125 -- # basic_rw 4096 00:09:03.457 08:51:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:09:03.457 08:51:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:09:03.457 08:51:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:09:03.457 08:51:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:09:03.457 08:51:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:09:03.457 08:51:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:09:03.457 08:51:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:09:03.457 08:51:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:09:03.457 08:51:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:09:03.457 08:51:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:09:03.457 08:51:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:09:03.457 08:51:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:09:03.457 08:51:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:09:03.457 08:51:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:09:03.457 08:51:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:09:03.458 08:51:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:09:03.458 08:51:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:09:03.458 08:51:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:04.024 08:51:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:09:04.024 08:51:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:09:04.024 08:51:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:04.024 08:51:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:04.024 { 00:09:04.024 "subsystems": [ 00:09:04.024 { 00:09:04.024 "subsystem": "bdev", 00:09:04.024 "config": [ 00:09:04.024 { 00:09:04.024 "params": { 00:09:04.024 "trtype": "pcie", 00:09:04.024 "traddr": "0000:00:10.0", 00:09:04.024 "name": "Nvme0" 00:09:04.024 }, 00:09:04.024 "method": "bdev_nvme_attach_controller" 00:09:04.024 }, 00:09:04.024 { 00:09:04.024 "method": "bdev_wait_for_examine" 00:09:04.024 } 00:09:04.024 ] 00:09:04.024 } 00:09:04.024 ] 00:09:04.024 } 00:09:04.283 [2024-07-25 08:51:11.153224] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
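The dd_rw setup traced above builds its block-size list by left-shifting the 4096-byte native block size; rendered as a stand-alone loop (variable names follow the basic_rw.sh trace), together with the queue depths it is paired with:

  native_bs=4096
  qds=(1 64)
  bss=()
  for bs in {0..2}; do
    bss+=($((native_bs << bs)))   # 4096, 8192, 16384
  done
  # Each block size is then exercised at queue depth 1 and 64, giving the six
  # write/read-back/verify passes that follow in this log.
  for bs in "${bss[@]}"; do
    for qd in "${qds[@]}"; do
      echo "pass: bs=$bs qd=$qd"
    done
  done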
00:09:04.283 [2024-07-25 08:51:11.153459] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63114 ] 00:09:04.283 [2024-07-25 08:51:11.323560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.541 [2024-07-25 08:51:11.569935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.800 [2024-07-25 08:51:11.771433] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:06.083  Copying: 60/60 [kB] (average 19 MBps) 00:09:06.083 00:09:06.083 08:51:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:09:06.083 08:51:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:09:06.083 08:51:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:06.083 08:51:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:06.341 { 00:09:06.341 "subsystems": [ 00:09:06.341 { 00:09:06.341 "subsystem": "bdev", 00:09:06.341 "config": [ 00:09:06.341 { 00:09:06.341 "params": { 00:09:06.341 "trtype": "pcie", 00:09:06.341 "traddr": "0000:00:10.0", 00:09:06.341 "name": "Nvme0" 00:09:06.341 }, 00:09:06.341 "method": "bdev_nvme_attach_controller" 00:09:06.341 }, 00:09:06.341 { 00:09:06.341 "method": "bdev_wait_for_examine" 00:09:06.341 } 00:09:06.341 ] 00:09:06.341 } 00:09:06.341 ] 00:09:06.341 } 00:09:06.341 [2024-07-25 08:51:13.232897] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:06.341 [2024-07-25 08:51:13.233077] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63145 ] 00:09:06.341 [2024-07-25 08:51:13.401711] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.600 [2024-07-25 08:51:13.681638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.857 [2024-07-25 08:51:13.886346] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:08.054  Copying: 60/60 [kB] (average 14 MBps) 00:09:08.054 00:09:08.054 08:51:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:08.054 08:51:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:09:08.054 08:51:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:09:08.054 08:51:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:09:08.054 08:51:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:09:08.054 08:51:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:09:08.054 08:51:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:09:08.054 08:51:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:09:08.054 08:51:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:09:08.054 08:51:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:08.054 08:51:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:08.054 { 00:09:08.054 "subsystems": [ 00:09:08.054 { 00:09:08.054 "subsystem": "bdev", 00:09:08.054 "config": [ 00:09:08.054 { 00:09:08.054 "params": { 00:09:08.054 "trtype": "pcie", 00:09:08.054 "traddr": "0000:00:10.0", 00:09:08.054 "name": "Nvme0" 00:09:08.054 }, 00:09:08.054 "method": "bdev_nvme_attach_controller" 00:09:08.054 }, 00:09:08.054 { 00:09:08.054 "method": "bdev_wait_for_examine" 00:09:08.054 } 00:09:08.054 ] 00:09:08.054 } 00:09:08.054 ] 00:09:08.054 } 00:09:08.054 [2024-07-25 08:51:15.137440] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
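The first bs=4096/qd=1 pass above is a complete write/read-back/verify cycle: 15 blocks (61440 bytes) are written from dd.dump0, read back into dd.dump1, compared with diff, and the bdev is wiped before the next pass. A condensed sketch of that cycle with the same flags, assuming the JSON bdev config shown in the trace has been saved to a file (hypothetical name conf.json):

  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
  DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
  # write 15 x 4096 B, read the same 15 blocks back, then byte-compare the two files
  "$SPDK_DD" --if="$DUMP0" --ob=Nvme0n1 --bs=4096 --qd=1 --json conf.json
  "$SPDK_DD" --ib=Nvme0n1 --of="$DUMP1" --bs=4096 --qd=1 --count=15 --json conf.json
  diff -q "$DUMP0" "$DUMP1"
  # clear_nvme step: overwrite the first 1 MiB of the bdev with zeroes so the next bs/qd
  # pass starts from known contents
  "$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json conf.json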
00:09:08.054 [2024-07-25 08:51:15.137631] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63178 ] 00:09:08.312 [2024-07-25 08:51:15.304401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.570 [2024-07-25 08:51:15.542028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.828 [2024-07-25 08:51:15.743555] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:10.203  Copying: 1024/1024 [kB] (average 500 MBps) 00:09:10.203 00:09:10.203 08:51:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:09:10.203 08:51:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:09:10.203 08:51:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:09:10.203 08:51:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:09:10.203 08:51:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:09:10.203 08:51:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:09:10.203 08:51:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:10.769 08:51:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:09:10.769 08:51:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:09:10.769 08:51:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:10.769 08:51:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:10.769 { 00:09:10.769 "subsystems": [ 00:09:10.769 { 00:09:10.769 "subsystem": "bdev", 00:09:10.769 "config": [ 00:09:10.769 { 00:09:10.769 "params": { 00:09:10.769 "trtype": "pcie", 00:09:10.769 "traddr": "0000:00:10.0", 00:09:10.769 "name": "Nvme0" 00:09:10.769 }, 00:09:10.769 "method": "bdev_nvme_attach_controller" 00:09:10.769 }, 00:09:10.769 { 00:09:10.769 "method": "bdev_wait_for_examine" 00:09:10.769 } 00:09:10.769 ] 00:09:10.769 } 00:09:10.769 ] 00:09:10.769 } 00:09:10.769 [2024-07-25 08:51:17.876607] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:10.769 [2024-07-25 08:51:17.876770] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63215 ] 00:09:11.026 [2024-07-25 08:51:18.043936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.284 [2024-07-25 08:51:18.343834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.541 [2024-07-25 08:51:18.552812] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:12.732  Copying: 60/60 [kB] (average 58 MBps) 00:09:12.732 00:09:12.732 08:51:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:09:12.732 08:51:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:09:12.732 08:51:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:12.732 08:51:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:12.991 { 00:09:12.991 "subsystems": [ 00:09:12.991 { 00:09:12.991 "subsystem": "bdev", 00:09:12.991 "config": [ 00:09:12.991 { 00:09:12.991 "params": { 00:09:12.991 "trtype": "pcie", 00:09:12.991 "traddr": "0000:00:10.0", 00:09:12.991 "name": "Nvme0" 00:09:12.991 }, 00:09:12.991 "method": "bdev_nvme_attach_controller" 00:09:12.991 }, 00:09:12.991 { 00:09:12.991 "method": "bdev_wait_for_examine" 00:09:12.991 } 00:09:12.991 ] 00:09:12.991 } 00:09:12.991 ] 00:09:12.991 } 00:09:12.991 [2024-07-25 08:51:19.915600] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:12.991 [2024-07-25 08:51:19.915774] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63246 ] 00:09:12.991 [2024-07-25 08:51:20.083066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.251 [2024-07-25 08:51:20.360133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.514 [2024-07-25 08:51:20.566439] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:15.144  Copying: 60/60 [kB] (average 58 MBps) 00:09:15.144 00:09:15.144 08:51:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:15.144 08:51:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:09:15.144 08:51:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:09:15.144 08:51:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:09:15.144 08:51:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:09:15.144 08:51:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:09:15.144 08:51:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:09:15.144 08:51:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:09:15.144 08:51:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:09:15.144 08:51:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:15.144 08:51:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:15.144 { 00:09:15.144 "subsystems": [ 00:09:15.144 { 00:09:15.144 "subsystem": "bdev", 00:09:15.144 "config": [ 00:09:15.144 { 00:09:15.144 "params": { 00:09:15.144 "trtype": "pcie", 00:09:15.144 "traddr": "0000:00:10.0", 00:09:15.144 "name": "Nvme0" 00:09:15.144 }, 00:09:15.144 "method": "bdev_nvme_attach_controller" 00:09:15.144 }, 00:09:15.144 { 00:09:15.144 "method": "bdev_wait_for_examine" 00:09:15.144 } 00:09:15.144 ] 00:09:15.144 } 00:09:15.144 ] 00:09:15.144 } 00:09:15.144 [2024-07-25 08:51:22.060199] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:15.144 [2024-07-25 08:51:22.060366] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63279 ] 00:09:15.144 [2024-07-25 08:51:22.229782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.402 [2024-07-25 08:51:22.480970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.661 [2024-07-25 08:51:22.686526] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:16.861  Copying: 1024/1024 [kB] (average 500 MBps) 00:09:16.861 00:09:16.861 08:51:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:09:16.861 08:51:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:09:16.861 08:51:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:09:16.861 08:51:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:09:16.861 08:51:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:09:16.861 08:51:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:09:16.861 08:51:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:09:16.861 08:51:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:17.428 08:51:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:09:17.428 08:51:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:09:17.428 08:51:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:17.428 08:51:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:17.687 { 00:09:17.687 "subsystems": [ 00:09:17.687 { 00:09:17.687 "subsystem": "bdev", 00:09:17.687 "config": [ 00:09:17.687 { 00:09:17.687 "params": { 00:09:17.687 "trtype": "pcie", 00:09:17.687 "traddr": "0000:00:10.0", 00:09:17.687 "name": "Nvme0" 00:09:17.687 }, 00:09:17.687 "method": "bdev_nvme_attach_controller" 00:09:17.687 }, 00:09:17.687 { 00:09:17.687 "method": "bdev_wait_for_examine" 00:09:17.687 } 00:09:17.687 ] 00:09:17.687 } 00:09:17.687 ] 00:09:17.687 } 00:09:17.687 [2024-07-25 08:51:24.622232] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:17.687 [2024-07-25 08:51:24.622409] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63314 ] 00:09:17.687 [2024-07-25 08:51:24.792068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.944 [2024-07-25 08:51:25.058679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.203 [2024-07-25 08:51:25.272971] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:19.939  Copying: 56/56 [kB] (average 27 MBps) 00:09:19.939 00:09:19.939 08:51:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:09:19.939 08:51:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:09:19.939 08:51:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:19.939 08:51:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:19.939 { 00:09:19.939 "subsystems": [ 00:09:19.939 { 00:09:19.939 "subsystem": "bdev", 00:09:19.939 "config": [ 00:09:19.939 { 00:09:19.939 "params": { 00:09:19.939 "trtype": "pcie", 00:09:19.939 "traddr": "0000:00:10.0", 00:09:19.939 "name": "Nvme0" 00:09:19.939 }, 00:09:19.939 "method": "bdev_nvme_attach_controller" 00:09:19.939 }, 00:09:19.939 { 00:09:19.939 "method": "bdev_wait_for_examine" 00:09:19.939 } 00:09:19.939 ] 00:09:19.939 } 00:09:19.939 ] 00:09:19.939 } 00:09:19.939 [2024-07-25 08:51:26.764004] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:19.939 [2024-07-25 08:51:26.764166] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63346 ] 00:09:19.939 [2024-07-25 08:51:26.930533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.197 [2024-07-25 08:51:27.220838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.455 [2024-07-25 08:51:27.438763] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:21.648  Copying: 56/56 [kB] (average 27 MBps) 00:09:21.648 00:09:21.648 08:51:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:21.648 08:51:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:09:21.648 08:51:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:09:21.648 08:51:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:09:21.648 08:51:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:09:21.648 08:51:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:09:21.648 08:51:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:09:21.648 08:51:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:09:21.648 08:51:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:09:21.648 08:51:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:21.648 08:51:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:21.648 { 00:09:21.648 "subsystems": [ 00:09:21.648 { 00:09:21.648 "subsystem": "bdev", 00:09:21.648 "config": [ 00:09:21.648 { 00:09:21.648 "params": { 00:09:21.648 "trtype": "pcie", 00:09:21.648 "traddr": "0000:00:10.0", 00:09:21.648 "name": "Nvme0" 00:09:21.648 }, 00:09:21.648 "method": "bdev_nvme_attach_controller" 00:09:21.648 }, 00:09:21.648 { 00:09:21.648 "method": "bdev_wait_for_examine" 00:09:21.648 } 00:09:21.648 ] 00:09:21.648 } 00:09:21.648 ] 00:09:21.648 } 00:09:21.648 [2024-07-25 08:51:28.714707] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:21.648 [2024-07-25 08:51:28.714889] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63374 ] 00:09:21.906 [2024-07-25 08:51:28.881457] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.165 [2024-07-25 08:51:29.120620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.423 [2024-07-25 08:51:29.344943] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:23.842  Copying: 1024/1024 [kB] (average 1000 MBps) 00:09:23.842 00:09:23.842 08:51:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:09:23.842 08:51:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:09:23.842 08:51:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:09:23.842 08:51:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:09:23.842 08:51:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:09:23.842 08:51:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:09:23.842 08:51:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:24.408 08:51:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:09:24.408 08:51:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:09:24.408 08:51:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:24.408 08:51:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:24.408 { 00:09:24.408 "subsystems": [ 00:09:24.408 { 00:09:24.408 "subsystem": "bdev", 00:09:24.408 "config": [ 00:09:24.408 { 00:09:24.408 "params": { 00:09:24.408 "trtype": "pcie", 00:09:24.408 "traddr": "0000:00:10.0", 00:09:24.408 "name": "Nvme0" 00:09:24.408 }, 00:09:24.408 "method": "bdev_nvme_attach_controller" 00:09:24.408 }, 00:09:24.408 { 00:09:24.408 "method": "bdev_wait_for_examine" 00:09:24.408 } 00:09:24.408 ] 00:09:24.408 } 00:09:24.408 ] 00:09:24.408 } 00:09:24.408 [2024-07-25 08:51:31.491905] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:24.408 [2024-07-25 08:51:31.492099] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63416 ] 00:09:24.666 [2024-07-25 08:51:31.669137] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.924 [2024-07-25 08:51:31.965283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.182 [2024-07-25 08:51:32.173008] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:26.379  Copying: 56/56 [kB] (average 54 MBps) 00:09:26.380 00:09:26.380 08:51:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:09:26.380 08:51:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:09:26.380 08:51:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:26.380 08:51:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:26.380 { 00:09:26.380 "subsystems": [ 00:09:26.380 { 00:09:26.380 "subsystem": "bdev", 00:09:26.380 "config": [ 00:09:26.380 { 00:09:26.380 "params": { 00:09:26.380 "trtype": "pcie", 00:09:26.380 "traddr": "0000:00:10.0", 00:09:26.380 "name": "Nvme0" 00:09:26.380 }, 00:09:26.380 "method": "bdev_nvme_attach_controller" 00:09:26.380 }, 00:09:26.380 { 00:09:26.380 "method": "bdev_wait_for_examine" 00:09:26.380 } 00:09:26.380 ] 00:09:26.380 } 00:09:26.380 ] 00:09:26.380 } 00:09:26.380 [2024-07-25 08:51:33.488116] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:26.380 [2024-07-25 08:51:33.488328] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63448 ] 00:09:26.637 [2024-07-25 08:51:33.665161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.895 [2024-07-25 08:51:33.957758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.153 [2024-07-25 08:51:34.182056] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:28.785  Copying: 56/56 [kB] (average 54 MBps) 00:09:28.785 00:09:28.785 08:51:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:28.785 08:51:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:09:28.785 08:51:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:09:28.785 08:51:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:09:28.785 08:51:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:09:28.785 08:51:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:09:28.785 08:51:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:09:28.785 08:51:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:09:28.785 08:51:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:09:28.785 08:51:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:28.785 08:51:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:28.785 { 00:09:28.785 "subsystems": [ 00:09:28.785 { 00:09:28.785 "subsystem": "bdev", 00:09:28.785 "config": [ 00:09:28.785 { 00:09:28.785 "params": { 00:09:28.785 "trtype": "pcie", 00:09:28.785 "traddr": "0000:00:10.0", 00:09:28.785 "name": "Nvme0" 00:09:28.785 }, 00:09:28.785 "method": "bdev_nvme_attach_controller" 00:09:28.785 }, 00:09:28.785 { 00:09:28.785 "method": "bdev_wait_for_examine" 00:09:28.785 } 00:09:28.785 ] 00:09:28.785 } 00:09:28.785 ] 00:09:28.785 } 00:09:28.785 [2024-07-25 08:51:35.684119] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:28.785 [2024-07-25 08:51:35.684321] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63481 ] 00:09:28.785 [2024-07-25 08:51:35.860602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.044 [2024-07-25 08:51:36.103417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.302 [2024-07-25 08:51:36.308240] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:30.493  Copying: 1024/1024 [kB] (average 1000 MBps) 00:09:30.493 00:09:30.493 08:51:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:09:30.493 08:51:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:09:30.493 08:51:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:09:30.493 08:51:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:09:30.493 08:51:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:09:30.493 08:51:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:09:30.493 08:51:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:09:30.493 08:51:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:31.059 08:51:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:09:31.059 08:51:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:09:31.059 08:51:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:31.059 08:51:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:31.059 { 00:09:31.059 "subsystems": [ 00:09:31.059 { 00:09:31.059 "subsystem": "bdev", 00:09:31.059 "config": [ 00:09:31.059 { 00:09:31.059 "params": { 00:09:31.059 "trtype": "pcie", 00:09:31.059 "traddr": "0000:00:10.0", 00:09:31.059 "name": "Nvme0" 00:09:31.059 }, 00:09:31.059 "method": "bdev_nvme_attach_controller" 00:09:31.059 }, 00:09:31.059 { 00:09:31.059 "method": "bdev_wait_for_examine" 00:09:31.059 } 00:09:31.059 ] 00:09:31.059 } 00:09:31.059 ] 00:09:31.059 } 00:09:31.059 [2024-07-25 08:51:38.107510] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:31.059 [2024-07-25 08:51:38.107925] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63512 ] 00:09:31.317 [2024-07-25 08:51:38.274715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.574 [2024-07-25 08:51:38.516186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.832 [2024-07-25 08:51:38.722881] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:33.207  Copying: 48/48 [kB] (average 46 MBps) 00:09:33.207 00:09:33.207 08:51:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:09:33.207 08:51:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:09:33.207 08:51:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:33.207 08:51:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:33.207 { 00:09:33.207 "subsystems": [ 00:09:33.207 { 00:09:33.207 "subsystem": "bdev", 00:09:33.207 "config": [ 00:09:33.207 { 00:09:33.207 "params": { 00:09:33.207 "trtype": "pcie", 00:09:33.207 "traddr": "0000:00:10.0", 00:09:33.207 "name": "Nvme0" 00:09:33.207 }, 00:09:33.207 "method": "bdev_nvme_attach_controller" 00:09:33.207 }, 00:09:33.207 { 00:09:33.207 "method": "bdev_wait_for_examine" 00:09:33.207 } 00:09:33.207 ] 00:09:33.207 } 00:09:33.207 ] 00:09:33.207 } 00:09:33.207 [2024-07-25 08:51:40.256888] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:33.207 [2024-07-25 08:51:40.257090] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63544 ] 00:09:33.465 [2024-07-25 08:51:40.429169] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.722 [2024-07-25 08:51:40.704583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.980 [2024-07-25 08:51:40.908721] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:35.173  Copying: 48/48 [kB] (average 46 MBps) 00:09:35.173 00:09:35.173 08:51:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:35.173 08:51:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:09:35.173 08:51:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:09:35.173 08:51:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:09:35.173 08:51:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:09:35.173 08:51:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:09:35.173 08:51:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:09:35.173 08:51:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:09:35.173 08:51:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:09:35.173 08:51:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:35.173 08:51:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:35.173 { 00:09:35.173 "subsystems": [ 00:09:35.173 { 00:09:35.173 "subsystem": "bdev", 00:09:35.173 "config": [ 00:09:35.173 { 00:09:35.173 "params": { 00:09:35.173 "trtype": "pcie", 00:09:35.173 "traddr": "0000:00:10.0", 00:09:35.173 "name": "Nvme0" 00:09:35.173 }, 00:09:35.173 "method": "bdev_nvme_attach_controller" 00:09:35.173 }, 00:09:35.173 { 00:09:35.173 "method": "bdev_wait_for_examine" 00:09:35.173 } 00:09:35.173 ] 00:09:35.173 } 00:09:35.173 ] 00:09:35.173 } 00:09:35.173 [2024-07-25 08:51:42.204671] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:35.173 [2024-07-25 08:51:42.205125] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63576 ] 00:09:35.431 [2024-07-25 08:51:42.376000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.688 [2024-07-25 08:51:42.620104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.946 [2024-07-25 08:51:42.827707] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:37.319  Copying: 1024/1024 [kB] (average 500 MBps) 00:09:37.319 00:09:37.319 08:51:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:09:37.319 08:51:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:09:37.319 08:51:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:09:37.319 08:51:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:09:37.319 08:51:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:09:37.319 08:51:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:09:37.319 08:51:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:37.576 08:51:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:09:37.576 08:51:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:09:37.872 08:51:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:37.872 08:51:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:37.872 { 00:09:37.872 "subsystems": [ 00:09:37.872 { 00:09:37.872 "subsystem": "bdev", 00:09:37.872 "config": [ 00:09:37.872 { 00:09:37.872 "params": { 00:09:37.872 "trtype": "pcie", 00:09:37.872 "traddr": "0000:00:10.0", 00:09:37.872 "name": "Nvme0" 00:09:37.872 }, 00:09:37.872 "method": "bdev_nvme_attach_controller" 00:09:37.872 }, 00:09:37.872 { 00:09:37.872 "method": "bdev_wait_for_examine" 00:09:37.872 } 00:09:37.872 ] 00:09:37.872 } 00:09:37.872 ] 00:09:37.872 } 00:09:37.872 [2024-07-25 08:51:44.804028] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:37.872 [2024-07-25 08:51:44.804523] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63614 ] 00:09:37.872 [2024-07-25 08:51:44.980940] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.437 [2024-07-25 08:51:45.256567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.437 [2024-07-25 08:51:45.486000] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:39.629  Copying: 48/48 [kB] (average 46 MBps) 00:09:39.629 00:09:39.629 08:51:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:09:39.629 08:51:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:09:39.629 08:51:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:39.629 08:51:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:39.888 { 00:09:39.888 "subsystems": [ 00:09:39.888 { 00:09:39.888 "subsystem": "bdev", 00:09:39.888 "config": [ 00:09:39.888 { 00:09:39.888 "params": { 00:09:39.888 "trtype": "pcie", 00:09:39.888 "traddr": "0000:00:10.0", 00:09:39.888 "name": "Nvme0" 00:09:39.888 }, 00:09:39.888 "method": "bdev_nvme_attach_controller" 00:09:39.888 }, 00:09:39.888 { 00:09:39.888 "method": "bdev_wait_for_examine" 00:09:39.888 } 00:09:39.888 ] 00:09:39.888 } 00:09:39.888 ] 00:09:39.888 } 00:09:39.888 [2024-07-25 08:51:46.834618] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:39.888 [2024-07-25 08:51:46.834895] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63644 ] 00:09:40.145 [2024-07-25 08:51:47.022651] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.404 [2024-07-25 08:51:47.300487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.404 [2024-07-25 08:51:47.507577] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:42.037  Copying: 48/48 [kB] (average 46 MBps) 00:09:42.037 00:09:42.037 08:51:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:42.037 08:51:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:09:42.037 08:51:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:09:42.037 08:51:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:09:42.037 08:51:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:09:42.037 08:51:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:09:42.037 08:51:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:09:42.037 08:51:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:09:42.037 08:51:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:09:42.037 08:51:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:42.037 08:51:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:42.037 { 00:09:42.037 "subsystems": [ 00:09:42.037 { 00:09:42.037 "subsystem": "bdev", 00:09:42.037 "config": [ 00:09:42.037 { 00:09:42.037 "params": { 00:09:42.037 "trtype": "pcie", 00:09:42.037 "traddr": "0000:00:10.0", 00:09:42.037 "name": "Nvme0" 00:09:42.037 }, 00:09:42.037 "method": "bdev_nvme_attach_controller" 00:09:42.037 }, 00:09:42.037 { 00:09:42.037 "method": "bdev_wait_for_examine" 00:09:42.037 } 00:09:42.037 ] 00:09:42.037 } 00:09:42.037 ] 00:09:42.037 } 00:09:42.037 [2024-07-25 08:51:48.991740] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:42.037 [2024-07-25 08:51:48.991959] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63677 ] 00:09:42.308 [2024-07-25 08:51:49.166612] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.308 [2024-07-25 08:51:49.405792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.587 [2024-07-25 08:51:49.616096] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:43.778  Copying: 1024/1024 [kB] (average 500 MBps) 00:09:43.778 00:09:43.778 ************************************ 00:09:43.778 END TEST dd_rw 00:09:43.778 ************************************ 00:09:43.778 00:09:43.778 real 0m40.433s 00:09:43.778 user 0m33.929s 00:09:43.778 sys 0m17.108s 00:09:43.778 08:51:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:43.778 08:51:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:43.778 08:51:50 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:09:43.778 08:51:50 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:43.778 08:51:50 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:43.778 08:51:50 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:09:43.778 ************************************ 00:09:43.778 START TEST dd_rw_offset 00:09:43.778 ************************************ 00:09:43.778 08:51:50 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1125 -- # basic_offset 00:09:43.778 08:51:50 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:09:43.778 08:51:50 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:09:43.778 08:51:50 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:09:43.778 08:51:50 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:09:44.037 08:51:50 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:09:44.037 08:51:50 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=htonlk6fp5yknw190nvy47d14o6l0rpf4fuc2ipz39a0stttcmgjwk6h6ktssu99ekq6mr2zq1hjjl0t8rhkx66c2sptb23kg0r1u6zy4po1r88zxwtwc3a413jf2ayg7i8sbqqdm0zfvvoyniuam7hwkykbq5dc5b7kmp89a0dl53uy7h425mujz5fa91vft73593qv3half6ueruzw15poam7pr3q4nb2ugbq6xbjr7hiyqr3a5zowaweezy6v4nqa0ynscacukptxbm0teiu3at3xei474b0h2rh2k1qx4wwxqqqsm4i6wrct6mddr1n8a7ivrscb05s11lx3vk2xwvxpplyqijsu7tctj3lftmo7r8p9tgf9hw4gwn7tthzntdqzpcznixj1dlrfo8vwcc9951i6tktbvmz8jafi725b2xwjgcs5akdtbvblj6v9w786dkvrhszv501qo3rre333w09yz5z3kwaqvl6333fpiv9v843cfo0y78awmxw584jxixc6ivshy4h6n1fnt9ft6x3r52u2f053c7y4k2zw5u8mucfxk7g0frjcjqhsofjmjbi46zkaux35sl9a290uog1yi8vg017drrptbll5h9i8hh18nafqgr6tbrqwe5fml3mj935mempav0v022s04640ey6qmk83m5k857bauy70kj1m1dlo8rxq9i9pj84e8srkpsaiwssa929vrs492ul8wvymtiu2mgcq3wplm6ucbjgroxyxshlqiek9dkv4dhsk5zs88djw5kmm3b2i9b4pf1c37j5bwa66ond1osxxd8f6jw1pztrotl0o8there6x7wh4wyzjsjocqlyul32kplosxdf2d78gi6odtw6pwk47kk45f2o9glyktrgx9egry0bmkk4zy9k4tlipp4o4shpadw947rhcmfnetsjeylztjhhmi740fbf6mwvey5nydxgiyxfhc2h8cefwya3haml92w5918w885c6iht7y03eihskv1wycdku6lxeixd5oltkuewjgw5wt6ul5xzyfl4iqoo9j770bsq57aqg54alk4ap016xr51ja9l6ifaypfbv1kw82o5lexpllw3gom6shkxyfqqqsvzqa1jomrr6tuu29u915q4xcfegn2th8r2w2cfrmbr2yj0tn0ois9usr42w7bicsou09p164ofrwepd6d9i8s52b477oqzidvy3p5tgvier2cgcr8dk1o0qavpkf8mvttyfj2pr7gzgj50kv4c68olw1unfer1w2vkplhzzl2ps7l8mg100ofejmgxmh1rb1r3aikwn4s9fmyvdphke3m76j5nd3njxifoxrwryjkp1rr2mtwffhw0enuae98753sf0a2lsz6032r9ygs7ufdq8xdtmelenptrswtw40et679lqwqbtp2iy62uagthmv16pf6avmx1wkmt5u6m5n09x0uoi0d2qib57u9lqjifp0p9slzkrkkrpvyibl9vcdowy7dt65zlrb5rvwycsucdfu307up2oed1drbq4ijk0wuc9vkl83nlm2u61bygbssykgxu7u262pkwh5sdaujejai9ki3i0vh5kts4iox8bl97hufn1qf1jowaulstgt3iwe56avwapxo62txll1fqeczxcrq8qo6aluqm4dmkl0am01xwpw5d8litvg3bewtxgjjhe1c5zsfpd1qlotbmv0e9sdgj5zs23w2z2ej5zu12mth3ij0pz4bxcuqv5mp0p98r7bak85r5u1csjf5jfdijlmyoeyhq2h2ux0lnn3va3e9fskfrqekhpe87xh4g2vxicaj3ykiscaemx5z0vbi3v6xldrk4v1t0i4sdxrhp5gufz2n4n9pqlae30qhq1dn5w3b7xa7g5rnj7cbjw034nn1a0khjvcdua4mr648rgb2jtqf4ylyb6bt3t3u434ls3gv2mgu1vxwqp0hn46r6ub6s35ddaymbu2dj6camy84shssnpu0k34iel43h8ia9ywxofsplpbinba2sdud95ipyv3d255h3uq5ixhbj360vcnhfpd3fpmxe4p128mvtckhq0ocwua7nu0ynbik5bysw7spnjxkqmc524u1o06ue4xbq54mibj79zyhpo9rs5dfww3zcb9rgpm5ece26b08nninajk1q8sx4i3uj2mnpj1o7bp0atanyddw4sqz1rpjpgbb0d4gskmope6ur9y1rb7h8qj89djlu6lapem0udnwinups013w11fhrddyka6hok15nuufc4pjq7q3bjxghii8dv25j9ouwyi1mcg34azi4ng70wzp2mb6yfti564k1epdaakg14tjaoe74iqob4adh4m5b8zya8ajfl85gvzuoqxnryor2qhksjg06jj169j7am1tetofitky8nond1vt6fiyt5bzco215ahy7p5y868dl49ky4p67ts7dnxcunsiw7ewh546qpq5h96acj5crf68heth95po8xw0x17xpjbmxp1p70hvlztq2vl875sjftly51i0dg47gagngep3dqcpzn3bo4zwrqxcqsss2fpxirshmzhs8ujp19dhk95zzh9prn29zm417m37towpsrsrghtnktxlbdydjy08rdr67bfrng4m2g4hb5lbyd8op09xjfvloduo5gzq2vg0eb0h4sch7ms0p1wwj07tcp1bp6xso3txxgyvzbbmrcc7dvinmomwqfs6srmuyiv5i7utz75snt8q0rmy4n8lg7hugtrto9xpflj1ygnyr10s1jy42rwu1mgipaka2rmx99hiibkg3epxr0ffq6e9bcxxi0rayipblmlzkq400imrmfzy0sas4jihn8jykhvownv7j4ekdasjliu72zflqoalsivny74t8gxtn0t4vadsyo0y0fk6gmcp6e9aq0s8ckfxhcdyrp0w3t4zeoui03whreg062w2q7g211fbyw0opnwh0nrz43td536cwfpre1m2qi25iimkqffouh8bqb3bc9k256yrcs0dfe7glfo9chl5wg2bzcce1tzwkb7rhuzsc371ubl7300sardfczfjdo0e4bmb71l4nxii6mcnwnzehqb3hre0r2hnwnfh5ccx2n8oymg1n8xang3tyeo935jgu8w15sdr7eaj6z4dqwff348sn1i8ub1b1ph1ww3vl5fgr522qqe893xg9qjr75i02apcsfcxzqf2h7q9nob4lapqnsjzdr3s2j2udws1hv5ro4g3i8wvcvazbo8ij9xcw52tfctnlt241glkwhhxnm3cxcz7jf3odj1667zr1321ybv6eckh0ocgybm4odc1mz1uqugvfex92mgnog76cg763lvhah1dje0ufondccygxqymmknf32iy53o29ts5bjiegc8jdt8y9c2j4sqlm772zcia07p1anw3vhhbptrcgc0x9axx6xaf5tba96oodhyszvwhb9zdlr5p6ns62h9wdpjep0tqyr2dkm45qf7o6i5i5h4
cog0ne9m7rw9ut7a6phxuub8p8yg0bl6a1x89lzhy14xm8na509uiv9kv3krkypyzgj6vhai2jy3b1j6p4qjjam624unl9uu2hibyxk9v8p7ly6ovs8znj1jay1gcoj8vrpmq4mjxp3stl9t00ma2aabf7jygdxn6avha95ct4d16iv92y6oajqdnqti6579fkfpgijohu39jfrj79iy026r07d4y7srb3dzkbq2ksb1giycqbe22llq2w99ia8hiux9y15h95py5tw246yqqwg4j1ihaj1ae7c3inq12gsyfiu2b3ik2sctmjo9tmlduydf7oi0bbxg3uwyzd2nxw10j89nrufas4o4bgig84jzdgg0xa0y0b6sh5xhwp96k3d6mytttvzth76q6ky6ifxskkwastcmfgu9do5jdkok3ilbswwg16swonq1moxjgoowcj5jxil8ost0yvjvzivke274nhm3vvlusa27mj5iwv24wxq6r8dn9wxcuzbd0pyfflhxqo6j3c8f2gxothybehcwh6jafk 00:09:44.037 08:51:50 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:09:44.037 08:51:50 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:09:44.037 08:51:50 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:09:44.037 08:51:50 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:09:44.037 { 00:09:44.037 "subsystems": [ 00:09:44.037 { 00:09:44.037 "subsystem": "bdev", 00:09:44.037 "config": [ 00:09:44.037 { 00:09:44.037 "params": { 00:09:44.037 "trtype": "pcie", 00:09:44.037 "traddr": "0000:00:10.0", 00:09:44.037 "name": "Nvme0" 00:09:44.037 }, 00:09:44.037 "method": "bdev_nvme_attach_controller" 00:09:44.037 }, 00:09:44.037 { 00:09:44.037 "method": "bdev_wait_for_examine" 00:09:44.037 } 00:09:44.037 ] 00:09:44.037 } 00:09:44.037 ] 00:09:44.037 } 00:09:44.037 [2024-07-25 08:51:51.015272] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:44.037 [2024-07-25 08:51:51.015458] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63725 ] 00:09:44.296 [2024-07-25 08:51:51.175321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.554 [2024-07-25 08:51:51.414989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.554 [2024-07-25 08:51:51.621161] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:46.186  Copying: 4096/4096 [B] (average 4000 kBps) 00:09:46.186 00:09:46.186 08:51:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:09:46.186 08:51:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:09:46.186 08:51:53 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:09:46.186 08:51:53 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:09:46.186 { 00:09:46.186 "subsystems": [ 00:09:46.186 { 00:09:46.186 "subsystem": "bdev", 00:09:46.186 "config": [ 00:09:46.186 { 00:09:46.186 "params": { 00:09:46.186 "trtype": "pcie", 00:09:46.186 "traddr": "0000:00:10.0", 00:09:46.186 "name": "Nvme0" 00:09:46.186 }, 00:09:46.186 "method": "bdev_nvme_attach_controller" 00:09:46.186 }, 00:09:46.186 { 00:09:46.186 "method": "bdev_wait_for_examine" 00:09:46.186 } 00:09:46.186 ] 00:09:46.186 } 00:09:46.186 ] 00:09:46.186 } 00:09:46.186 [2024-07-25 08:51:53.120295] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
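The offset pass above generates a 4096-byte payload (the long random string), writes it at an offset of one block with --seek=1, reads that block back with --skip=1 --count=1, and then string-compares the read-back data against the original (the [[ ... == ... ]] pattern match that follows). A hedged sketch of the same round trip, re-using the hypothetical conf.json from the earlier sketch and using scratch files plus cmp in place of the in-shell string compare (file names here are hypothetical):

  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  PAYLOAD=/tmp/dd.offset.payload
  READBACK=/tmp/dd.offset.readback
  head -c 4096 /dev/urandom > "$PAYLOAD"
  # write the 4 KiB payload one block past the start of Nvme0n1, then read it back
  # from the same offset
  "$SPDK_DD" --if="$PAYLOAD" --ob=Nvme0n1 --seek=1 --json conf.json
  "$SPDK_DD" --ib=Nvme0n1 --of="$READBACK" --skip=1 --count=1 --json conf.json
  cmp "$PAYLOAD" "$READBACK" && echo "offset round trip verified"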
00:09:46.186 [2024-07-25 08:51:53.120526] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63756 ] 00:09:46.186 [2024-07-25 08:51:53.299570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.795 [2024-07-25 08:51:53.585399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.795 [2024-07-25 08:51:53.790285] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:47.997  Copying: 4096/4096 [B] (average 4000 kBps) 00:09:47.997 00:09:47.997 08:51:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:09:47.998 08:51:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ htonlk6fp5yknw190nvy47d14o6l0rpf4fuc2ipz39a0stttcmgjwk6h6ktssu99ekq6mr2zq1hjjl0t8rhkx66c2sptb23kg0r1u6zy4po1r88zxwtwc3a413jf2ayg7i8sbqqdm0zfvvoyniuam7hwkykbq5dc5b7kmp89a0dl53uy7h425mujz5fa91vft73593qv3half6ueruzw15poam7pr3q4nb2ugbq6xbjr7hiyqr3a5zowaweezy6v4nqa0ynscacukptxbm0teiu3at3xei474b0h2rh2k1qx4wwxqqqsm4i6wrct6mddr1n8a7ivrscb05s11lx3vk2xwvxpplyqijsu7tctj3lftmo7r8p9tgf9hw4gwn7tthzntdqzpcznixj1dlrfo8vwcc9951i6tktbvmz8jafi725b2xwjgcs5akdtbvblj6v9w786dkvrhszv501qo3rre333w09yz5z3kwaqvl6333fpiv9v843cfo0y78awmxw584jxixc6ivshy4h6n1fnt9ft6x3r52u2f053c7y4k2zw5u8mucfxk7g0frjcjqhsofjmjbi46zkaux35sl9a290uog1yi8vg017drrptbll5h9i8hh18nafqgr6tbrqwe5fml3mj935mempav0v022s04640ey6qmk83m5k857bauy70kj1m1dlo8rxq9i9pj84e8srkpsaiwssa929vrs492ul8wvymtiu2mgcq3wplm6ucbjgroxyxshlqiek9dkv4dhsk5zs88djw5kmm3b2i9b4pf1c37j5bwa66ond1osxxd8f6jw1pztrotl0o8there6x7wh4wyzjsjocqlyul32kplosxdf2d78gi6odtw6pwk47kk45f2o9glyktrgx9egry0bmkk4zy9k4tlipp4o4shpadw947rhcmfnetsjeylztjhhmi740fbf6mwvey5nydxgiyxfhc2h8cefwya3haml92w5918w885c6iht7y03eihskv1wycdku6lxeixd5oltkuewjgw5wt6ul5xzyfl4iqoo9j770bsq57aqg54alk4ap016xr51ja9l6ifaypfbv1kw82o5lexpllw3gom6shkxyfqqqsvzqa1jomrr6tuu29u915q4xcfegn2th8r2w2cfrmbr2yj0tn0ois9usr42w7bicsou09p164ofrwepd6d9i8s52b477oqzidvy3p5tgvier2cgcr8dk1o0qavpkf8mvttyfj2pr7gzgj50kv4c68olw1unfer1w2vkplhzzl2ps7l8mg100ofejmgxmh1rb1r3aikwn4s9fmyvdphke3m76j5nd3njxifoxrwryjkp1rr2mtwffhw0enuae98753sf0a2lsz6032r9ygs7ufdq8xdtmelenptrswtw40et679lqwqbtp2iy62uagthmv16pf6avmx1wkmt5u6m5n09x0uoi0d2qib57u9lqjifp0p9slzkrkkrpvyibl9vcdowy7dt65zlrb5rvwycsucdfu307up2oed1drbq4ijk0wuc9vkl83nlm2u61bygbssykgxu7u262pkwh5sdaujejai9ki3i0vh5kts4iox8bl97hufn1qf1jowaulstgt3iwe56avwapxo62txll1fqeczxcrq8qo6aluqm4dmkl0am01xwpw5d8litvg3bewtxgjjhe1c5zsfpd1qlotbmv0e9sdgj5zs23w2z2ej5zu12mth3ij0pz4bxcuqv5mp0p98r7bak85r5u1csjf5jfdijlmyoeyhq2h2ux0lnn3va3e9fskfrqekhpe87xh4g2vxicaj3ykiscaemx5z0vbi3v6xldrk4v1t0i4sdxrhp5gufz2n4n9pqlae30qhq1dn5w3b7xa7g5rnj7cbjw034nn1a0khjvcdua4mr648rgb2jtqf4ylyb6bt3t3u434ls3gv2mgu1vxwqp0hn46r6ub6s35ddaymbu2dj6camy84shssnpu0k34iel43h8ia9ywxofsplpbinba2sdud95ipyv3d255h3uq5ixhbj360vcnhfpd3fpmxe4p128mvtckhq0ocwua7nu0ynbik5bysw7spnjxkqmc524u1o06ue4xbq54mibj79zyhpo9rs5dfww3zcb9rgpm5ece26b08nninajk1q8sx4i3uj2mnpj1o7bp0atanyddw4sqz1rpjpgbb0d4gskmope6ur9y1rb7h8qj89djlu6lapem0udnwinups013w11fhrddyka6hok15nuufc4pjq7q3bjxghii8dv25j9ouwyi1mcg34azi4ng70wzp2mb6yfti564k1epdaakg14tjaoe74iqob4adh4m5b8zya8ajfl85gvzuoqxnryor2qhksjg06jj169j7am1tetofitky8nond1vt6fiyt5bzco215ahy7p5y868dl49ky4p67ts7dnxcunsiw7ewh546qpq5h96acj5crf68heth95po8xw0x17xpjbmxp1p70hvlztq2vl875sjftly51i0dg47gagngep3dqcpzn3bo4zwrqxcqsss2fpxirshmzhs8ujp19dhk95zzh9prn29zm417m37t
owpsrsrghtnktxlbdydjy08rdr67bfrng4m2g4hb5lbyd8op09xjfvloduo5gzq2vg0eb0h4sch7ms0p1wwj07tcp1bp6xso3txxgyvzbbmrcc7dvinmomwqfs6srmuyiv5i7utz75snt8q0rmy4n8lg7hugtrto9xpflj1ygnyr10s1jy42rwu1mgipaka2rmx99hiibkg3epxr0ffq6e9bcxxi0rayipblmlzkq400imrmfzy0sas4jihn8jykhvownv7j4ekdasjliu72zflqoalsivny74t8gxtn0t4vadsyo0y0fk6gmcp6e9aq0s8ckfxhcdyrp0w3t4zeoui03whreg062w2q7g211fbyw0opnwh0nrz43td536cwfpre1m2qi25iimkqffouh8bqb3bc9k256yrcs0dfe7glfo9chl5wg2bzcce1tzwkb7rhuzsc371ubl7300sardfczfjdo0e4bmb71l4nxii6mcnwnzehqb3hre0r2hnwnfh5ccx2n8oymg1n8xang3tyeo935jgu8w15sdr7eaj6z4dqwff348sn1i8ub1b1ph1ww3vl5fgr522qqe893xg9qjr75i02apcsfcxzqf2h7q9nob4lapqnsjzdr3s2j2udws1hv5ro4g3i8wvcvazbo8ij9xcw52tfctnlt241glkwhhxnm3cxcz7jf3odj1667zr1321ybv6eckh0ocgybm4odc1mz1uqugvfex92mgnog76cg763lvhah1dje0ufondccygxqymmknf32iy53o29ts5bjiegc8jdt8y9c2j4sqlm772zcia07p1anw3vhhbptrcgc0x9axx6xaf5tba96oodhyszvwhb9zdlr5p6ns62h9wdpjep0tqyr2dkm45qf7o6i5i5h4cog0ne9m7rw9ut7a6phxuub8p8yg0bl6a1x89lzhy14xm8na509uiv9kv3krkypyzgj6vhai2jy3b1j6p4qjjam624unl9uu2hibyxk9v8p7ly6ovs8znj1jay1gcoj8vrpmq4mjxp3stl9t00ma2aabf7jygdxn6avha95ct4d16iv92y6oajqdnqti6579fkfpgijohu39jfrj79iy026r07d4y7srb3dzkbq2ksb1giycqbe22llq2w99ia8hiux9y15h95py5tw246yqqwg4j1ihaj1ae7c3inq12gsyfiu2b3ik2sctmjo9tmlduydf7oi0bbxg3uwyzd2nxw10j89nrufas4o4bgig84jzdgg0xa0y0b6sh5xhwp96k3d6mytttvzth76q6ky6ifxskkwastcmfgu9do5jdkok3ilbswwg16swonq1moxjgoowcj5jxil8ost0yvjvzivke274nhm3vvlusa27mj5iwv24wxq6r8dn9wxcuzbd0pyfflhxqo6j3c8f2gxothybehcwh6jafk == \h\t\o\n\l\k\6\f\p\5\y\k\n\w\1\9\0\n\v\y\4\7\d\1\4\o\6\l\0\r\p\f\4\f\u\c\2\i\p\z\3\9\a\0\s\t\t\t\c\m\g\j\w\k\6\h\6\k\t\s\s\u\9\9\e\k\q\6\m\r\2\z\q\1\h\j\j\l\0\t\8\r\h\k\x\6\6\c\2\s\p\t\b\2\3\k\g\0\r\1\u\6\z\y\4\p\o\1\r\8\8\z\x\w\t\w\c\3\a\4\1\3\j\f\2\a\y\g\7\i\8\s\b\q\q\d\m\0\z\f\v\v\o\y\n\i\u\a\m\7\h\w\k\y\k\b\q\5\d\c\5\b\7\k\m\p\8\9\a\0\d\l\5\3\u\y\7\h\4\2\5\m\u\j\z\5\f\a\9\1\v\f\t\7\3\5\9\3\q\v\3\h\a\l\f\6\u\e\r\u\z\w\1\5\p\o\a\m\7\p\r\3\q\4\n\b\2\u\g\b\q\6\x\b\j\r\7\h\i\y\q\r\3\a\5\z\o\w\a\w\e\e\z\y\6\v\4\n\q\a\0\y\n\s\c\a\c\u\k\p\t\x\b\m\0\t\e\i\u\3\a\t\3\x\e\i\4\7\4\b\0\h\2\r\h\2\k\1\q\x\4\w\w\x\q\q\q\s\m\4\i\6\w\r\c\t\6\m\d\d\r\1\n\8\a\7\i\v\r\s\c\b\0\5\s\1\1\l\x\3\v\k\2\x\w\v\x\p\p\l\y\q\i\j\s\u\7\t\c\t\j\3\l\f\t\m\o\7\r\8\p\9\t\g\f\9\h\w\4\g\w\n\7\t\t\h\z\n\t\d\q\z\p\c\z\n\i\x\j\1\d\l\r\f\o\8\v\w\c\c\9\9\5\1\i\6\t\k\t\b\v\m\z\8\j\a\f\i\7\2\5\b\2\x\w\j\g\c\s\5\a\k\d\t\b\v\b\l\j\6\v\9\w\7\8\6\d\k\v\r\h\s\z\v\5\0\1\q\o\3\r\r\e\3\3\3\w\0\9\y\z\5\z\3\k\w\a\q\v\l\6\3\3\3\f\p\i\v\9\v\8\4\3\c\f\o\0\y\7\8\a\w\m\x\w\5\8\4\j\x\i\x\c\6\i\v\s\h\y\4\h\6\n\1\f\n\t\9\f\t\6\x\3\r\5\2\u\2\f\0\5\3\c\7\y\4\k\2\z\w\5\u\8\m\u\c\f\x\k\7\g\0\f\r\j\c\j\q\h\s\o\f\j\m\j\b\i\4\6\z\k\a\u\x\3\5\s\l\9\a\2\9\0\u\o\g\1\y\i\8\v\g\0\1\7\d\r\r\p\t\b\l\l\5\h\9\i\8\h\h\1\8\n\a\f\q\g\r\6\t\b\r\q\w\e\5\f\m\l\3\m\j\9\3\5\m\e\m\p\a\v\0\v\0\2\2\s\0\4\6\4\0\e\y\6\q\m\k\8\3\m\5\k\8\5\7\b\a\u\y\7\0\k\j\1\m\1\d\l\o\8\r\x\q\9\i\9\p\j\8\4\e\8\s\r\k\p\s\a\i\w\s\s\a\9\2\9\v\r\s\4\9\2\u\l\8\w\v\y\m\t\i\u\2\m\g\c\q\3\w\p\l\m\6\u\c\b\j\g\r\o\x\y\x\s\h\l\q\i\e\k\9\d\k\v\4\d\h\s\k\5\z\s\8\8\d\j\w\5\k\m\m\3\b\2\i\9\b\4\p\f\1\c\3\7\j\5\b\w\a\6\6\o\n\d\1\o\s\x\x\d\8\f\6\j\w\1\p\z\t\r\o\t\l\0\o\8\t\h\e\r\e\6\x\7\w\h\4\w\y\z\j\s\j\o\c\q\l\y\u\l\3\2\k\p\l\o\s\x\d\f\2\d\7\8\g\i\6\o\d\t\w\6\p\w\k\4\7\k\k\4\5\f\2\o\9\g\l\y\k\t\r\g\x\9\e\g\r\y\0\b\m\k\k\4\z\y\9\k\4\t\l\i\p\p\4\o\4\s\h\p\a\d\w\9\4\7\r\h\c\m\f\n\e\t\s\j\e\y\l\z\t\j\h\h\m\i\7\4\0\f\b\f\6\m\w\v\e\y\5\n\y\d\x\g\i\y\x\f\h\c\2\h\8\c\e\f\w\y\a\3\h\a\m\l\9\2\w\5\9\1\8\w\8\8\5\c\6\i\h\t\7\y\0\3\e\i\h\s\k\v\1\w\y\c\d\k\u\6\l\x\e\i\x\d\5\o\l\t\k\u\e\w\j\g\
w\5\w\t\6\u\l\5\x\z\y\f\l\4\i\q\o\o\9\j\7\7\0\b\s\q\5\7\a\q\g\5\4\a\l\k\4\a\p\0\1\6\x\r\5\1\j\a\9\l\6\i\f\a\y\p\f\b\v\1\k\w\8\2\o\5\l\e\x\p\l\l\w\3\g\o\m\6\s\h\k\x\y\f\q\q\q\s\v\z\q\a\1\j\o\m\r\r\6\t\u\u\2\9\u\9\1\5\q\4\x\c\f\e\g\n\2\t\h\8\r\2\w\2\c\f\r\m\b\r\2\y\j\0\t\n\0\o\i\s\9\u\s\r\4\2\w\7\b\i\c\s\o\u\0\9\p\1\6\4\o\f\r\w\e\p\d\6\d\9\i\8\s\5\2\b\4\7\7\o\q\z\i\d\v\y\3\p\5\t\g\v\i\e\r\2\c\g\c\r\8\d\k\1\o\0\q\a\v\p\k\f\8\m\v\t\t\y\f\j\2\p\r\7\g\z\g\j\5\0\k\v\4\c\6\8\o\l\w\1\u\n\f\e\r\1\w\2\v\k\p\l\h\z\z\l\2\p\s\7\l\8\m\g\1\0\0\o\f\e\j\m\g\x\m\h\1\r\b\1\r\3\a\i\k\w\n\4\s\9\f\m\y\v\d\p\h\k\e\3\m\7\6\j\5\n\d\3\n\j\x\i\f\o\x\r\w\r\y\j\k\p\1\r\r\2\m\t\w\f\f\h\w\0\e\n\u\a\e\9\8\7\5\3\s\f\0\a\2\l\s\z\6\0\3\2\r\9\y\g\s\7\u\f\d\q\8\x\d\t\m\e\l\e\n\p\t\r\s\w\t\w\4\0\e\t\6\7\9\l\q\w\q\b\t\p\2\i\y\6\2\u\a\g\t\h\m\v\1\6\p\f\6\a\v\m\x\1\w\k\m\t\5\u\6\m\5\n\0\9\x\0\u\o\i\0\d\2\q\i\b\5\7\u\9\l\q\j\i\f\p\0\p\9\s\l\z\k\r\k\k\r\p\v\y\i\b\l\9\v\c\d\o\w\y\7\d\t\6\5\z\l\r\b\5\r\v\w\y\c\s\u\c\d\f\u\3\0\7\u\p\2\o\e\d\1\d\r\b\q\4\i\j\k\0\w\u\c\9\v\k\l\8\3\n\l\m\2\u\6\1\b\y\g\b\s\s\y\k\g\x\u\7\u\2\6\2\p\k\w\h\5\s\d\a\u\j\e\j\a\i\9\k\i\3\i\0\v\h\5\k\t\s\4\i\o\x\8\b\l\9\7\h\u\f\n\1\q\f\1\j\o\w\a\u\l\s\t\g\t\3\i\w\e\5\6\a\v\w\a\p\x\o\6\2\t\x\l\l\1\f\q\e\c\z\x\c\r\q\8\q\o\6\a\l\u\q\m\4\d\m\k\l\0\a\m\0\1\x\w\p\w\5\d\8\l\i\t\v\g\3\b\e\w\t\x\g\j\j\h\e\1\c\5\z\s\f\p\d\1\q\l\o\t\b\m\v\0\e\9\s\d\g\j\5\z\s\2\3\w\2\z\2\e\j\5\z\u\1\2\m\t\h\3\i\j\0\p\z\4\b\x\c\u\q\v\5\m\p\0\p\9\8\r\7\b\a\k\8\5\r\5\u\1\c\s\j\f\5\j\f\d\i\j\l\m\y\o\e\y\h\q\2\h\2\u\x\0\l\n\n\3\v\a\3\e\9\f\s\k\f\r\q\e\k\h\p\e\8\7\x\h\4\g\2\v\x\i\c\a\j\3\y\k\i\s\c\a\e\m\x\5\z\0\v\b\i\3\v\6\x\l\d\r\k\4\v\1\t\0\i\4\s\d\x\r\h\p\5\g\u\f\z\2\n\4\n\9\p\q\l\a\e\3\0\q\h\q\1\d\n\5\w\3\b\7\x\a\7\g\5\r\n\j\7\c\b\j\w\0\3\4\n\n\1\a\0\k\h\j\v\c\d\u\a\4\m\r\6\4\8\r\g\b\2\j\t\q\f\4\y\l\y\b\6\b\t\3\t\3\u\4\3\4\l\s\3\g\v\2\m\g\u\1\v\x\w\q\p\0\h\n\4\6\r\6\u\b\6\s\3\5\d\d\a\y\m\b\u\2\d\j\6\c\a\m\y\8\4\s\h\s\s\n\p\u\0\k\3\4\i\e\l\4\3\h\8\i\a\9\y\w\x\o\f\s\p\l\p\b\i\n\b\a\2\s\d\u\d\9\5\i\p\y\v\3\d\2\5\5\h\3\u\q\5\i\x\h\b\j\3\6\0\v\c\n\h\f\p\d\3\f\p\m\x\e\4\p\1\2\8\m\v\t\c\k\h\q\0\o\c\w\u\a\7\n\u\0\y\n\b\i\k\5\b\y\s\w\7\s\p\n\j\x\k\q\m\c\5\2\4\u\1\o\0\6\u\e\4\x\b\q\5\4\m\i\b\j\7\9\z\y\h\p\o\9\r\s\5\d\f\w\w\3\z\c\b\9\r\g\p\m\5\e\c\e\2\6\b\0\8\n\n\i\n\a\j\k\1\q\8\s\x\4\i\3\u\j\2\m\n\p\j\1\o\7\b\p\0\a\t\a\n\y\d\d\w\4\s\q\z\1\r\p\j\p\g\b\b\0\d\4\g\s\k\m\o\p\e\6\u\r\9\y\1\r\b\7\h\8\q\j\8\9\d\j\l\u\6\l\a\p\e\m\0\u\d\n\w\i\n\u\p\s\0\1\3\w\1\1\f\h\r\d\d\y\k\a\6\h\o\k\1\5\n\u\u\f\c\4\p\j\q\7\q\3\b\j\x\g\h\i\i\8\d\v\2\5\j\9\o\u\w\y\i\1\m\c\g\3\4\a\z\i\4\n\g\7\0\w\z\p\2\m\b\6\y\f\t\i\5\6\4\k\1\e\p\d\a\a\k\g\1\4\t\j\a\o\e\7\4\i\q\o\b\4\a\d\h\4\m\5\b\8\z\y\a\8\a\j\f\l\8\5\g\v\z\u\o\q\x\n\r\y\o\r\2\q\h\k\s\j\g\0\6\j\j\1\6\9\j\7\a\m\1\t\e\t\o\f\i\t\k\y\8\n\o\n\d\1\v\t\6\f\i\y\t\5\b\z\c\o\2\1\5\a\h\y\7\p\5\y\8\6\8\d\l\4\9\k\y\4\p\6\7\t\s\7\d\n\x\c\u\n\s\i\w\7\e\w\h\5\4\6\q\p\q\5\h\9\6\a\c\j\5\c\r\f\6\8\h\e\t\h\9\5\p\o\8\x\w\0\x\1\7\x\p\j\b\m\x\p\1\p\7\0\h\v\l\z\t\q\2\v\l\8\7\5\s\j\f\t\l\y\5\1\i\0\d\g\4\7\g\a\g\n\g\e\p\3\d\q\c\p\z\n\3\b\o\4\z\w\r\q\x\c\q\s\s\s\2\f\p\x\i\r\s\h\m\z\h\s\8\u\j\p\1\9\d\h\k\9\5\z\z\h\9\p\r\n\2\9\z\m\4\1\7\m\3\7\t\o\w\p\s\r\s\r\g\h\t\n\k\t\x\l\b\d\y\d\j\y\0\8\r\d\r\6\7\b\f\r\n\g\4\m\2\g\4\h\b\5\l\b\y\d\8\o\p\0\9\x\j\f\v\l\o\d\u\o\5\g\z\q\2\v\g\0\e\b\0\h\4\s\c\h\7\m\s\0\p\1\w\w\j\0\7\t\c\p\1\b\p\6\x\s\o\3\t\x\x\g\y\v\z\b\b\m\r\c\c\7\d\v\i\n\m\o\m\w\q\f\s\6\s\r\m\u\y\i\v\5\i\7\u\t\z\7\5\s\n\t\8\q\0\r\m\y\4\n\8\l\g\7\h\u\g\t\r\t\o\9\x\p\f\l\j\1\y\g\n\y\r\1\0\s\1\j\y\4\2\r\w\u\1\m\g\i
\p\a\k\a\2\r\m\x\9\9\h\i\i\b\k\g\3\e\p\x\r\0\f\f\q\6\e\9\b\c\x\x\i\0\r\a\y\i\p\b\l\m\l\z\k\q\4\0\0\i\m\r\m\f\z\y\0\s\a\s\4\j\i\h\n\8\j\y\k\h\v\o\w\n\v\7\j\4\e\k\d\a\s\j\l\i\u\7\2\z\f\l\q\o\a\l\s\i\v\n\y\7\4\t\8\g\x\t\n\0\t\4\v\a\d\s\y\o\0\y\0\f\k\6\g\m\c\p\6\e\9\a\q\0\s\8\c\k\f\x\h\c\d\y\r\p\0\w\3\t\4\z\e\o\u\i\0\3\w\h\r\e\g\0\6\2\w\2\q\7\g\2\1\1\f\b\y\w\0\o\p\n\w\h\0\n\r\z\4\3\t\d\5\3\6\c\w\f\p\r\e\1\m\2\q\i\2\5\i\i\m\k\q\f\f\o\u\h\8\b\q\b\3\b\c\9\k\2\5\6\y\r\c\s\0\d\f\e\7\g\l\f\o\9\c\h\l\5\w\g\2\b\z\c\c\e\1\t\z\w\k\b\7\r\h\u\z\s\c\3\7\1\u\b\l\7\3\0\0\s\a\r\d\f\c\z\f\j\d\o\0\e\4\b\m\b\7\1\l\4\n\x\i\i\6\m\c\n\w\n\z\e\h\q\b\3\h\r\e\0\r\2\h\n\w\n\f\h\5\c\c\x\2\n\8\o\y\m\g\1\n\8\x\a\n\g\3\t\y\e\o\9\3\5\j\g\u\8\w\1\5\s\d\r\7\e\a\j\6\z\4\d\q\w\f\f\3\4\8\s\n\1\i\8\u\b\1\b\1\p\h\1\w\w\3\v\l\5\f\g\r\5\2\2\q\q\e\8\9\3\x\g\9\q\j\r\7\5\i\0\2\a\p\c\s\f\c\x\z\q\f\2\h\7\q\9\n\o\b\4\l\a\p\q\n\s\j\z\d\r\3\s\2\j\2\u\d\w\s\1\h\v\5\r\o\4\g\3\i\8\w\v\c\v\a\z\b\o\8\i\j\9\x\c\w\5\2\t\f\c\t\n\l\t\2\4\1\g\l\k\w\h\h\x\n\m\3\c\x\c\z\7\j\f\3\o\d\j\1\6\6\7\z\r\1\3\2\1\y\b\v\6\e\c\k\h\0\o\c\g\y\b\m\4\o\d\c\1\m\z\1\u\q\u\g\v\f\e\x\9\2\m\g\n\o\g\7\6\c\g\7\6\3\l\v\h\a\h\1\d\j\e\0\u\f\o\n\d\c\c\y\g\x\q\y\m\m\k\n\f\3\2\i\y\5\3\o\2\9\t\s\5\b\j\i\e\g\c\8\j\d\t\8\y\9\c\2\j\4\s\q\l\m\7\7\2\z\c\i\a\0\7\p\1\a\n\w\3\v\h\h\b\p\t\r\c\g\c\0\x\9\a\x\x\6\x\a\f\5\t\b\a\9\6\o\o\d\h\y\s\z\v\w\h\b\9\z\d\l\r\5\p\6\n\s\6\2\h\9\w\d\p\j\e\p\0\t\q\y\r\2\d\k\m\4\5\q\f\7\o\6\i\5\i\5\h\4\c\o\g\0\n\e\9\m\7\r\w\9\u\t\7\a\6\p\h\x\u\u\b\8\p\8\y\g\0\b\l\6\a\1\x\8\9\l\z\h\y\1\4\x\m\8\n\a\5\0\9\u\i\v\9\k\v\3\k\r\k\y\p\y\z\g\j\6\v\h\a\i\2\j\y\3\b\1\j\6\p\4\q\j\j\a\m\6\2\4\u\n\l\9\u\u\2\h\i\b\y\x\k\9\v\8\p\7\l\y\6\o\v\s\8\z\n\j\1\j\a\y\1\g\c\o\j\8\v\r\p\m\q\4\m\j\x\p\3\s\t\l\9\t\0\0\m\a\2\a\a\b\f\7\j\y\g\d\x\n\6\a\v\h\a\9\5\c\t\4\d\1\6\i\v\9\2\y\6\o\a\j\q\d\n\q\t\i\6\5\7\9\f\k\f\p\g\i\j\o\h\u\3\9\j\f\r\j\7\9\i\y\0\2\6\r\0\7\d\4\y\7\s\r\b\3\d\z\k\b\q\2\k\s\b\1\g\i\y\c\q\b\e\2\2\l\l\q\2\w\9\9\i\a\8\h\i\u\x\9\y\1\5\h\9\5\p\y\5\t\w\2\4\6\y\q\q\w\g\4\j\1\i\h\a\j\1\a\e\7\c\3\i\n\q\1\2\g\s\y\f\i\u\2\b\3\i\k\2\s\c\t\m\j\o\9\t\m\l\d\u\y\d\f\7\o\i\0\b\b\x\g\3\u\w\y\z\d\2\n\x\w\1\0\j\8\9\n\r\u\f\a\s\4\o\4\b\g\i\g\8\4\j\z\d\g\g\0\x\a\0\y\0\b\6\s\h\5\x\h\w\p\9\6\k\3\d\6\m\y\t\t\t\v\z\t\h\7\6\q\6\k\y\6\i\f\x\s\k\k\w\a\s\t\c\m\f\g\u\9\d\o\5\j\d\k\o\k\3\i\l\b\s\w\w\g\1\6\s\w\o\n\q\1\m\o\x\j\g\o\o\w\c\j\5\j\x\i\l\8\o\s\t\0\y\v\j\v\z\i\v\k\e\2\7\4\n\h\m\3\v\v\l\u\s\a\2\7\m\j\5\i\w\v\2\4\w\x\q\6\r\8\d\n\9\w\x\c\u\z\b\d\0\p\y\f\f\l\h\x\q\o\6\j\3\c\8\f\2\g\x\o\t\h\y\b\e\h\c\w\h\6\j\a\f\k ]] 00:09:47.998 00:09:47.998 real 0m4.084s 00:09:47.998 user 0m3.375s 00:09:47.998 sys 0m1.867s 00:09:47.998 08:51:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:47.998 08:51:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:09:47.998 ************************************ 00:09:47.998 END TEST dd_rw_offset 00:09:47.998 ************************************ 00:09:47.998 08:51:55 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:09:47.998 08:51:55 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:09:47.998 08:51:55 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:09:47.998 08:51:55 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:09:47.998 08:51:55 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:09:47.998 08:51:55 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:09:47.998 08:51:55 
spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:09:47.998 08:51:55 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:09:47.998 08:51:55 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:09:47.998 08:51:55 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:47.998 08:51:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:09:48.256 [2024-07-25 08:51:55.117194] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:48.256 [2024-07-25 08:51:55.117423] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63803 ] 00:09:48.256 { 00:09:48.256 "subsystems": [ 00:09:48.256 { 00:09:48.256 "subsystem": "bdev", 00:09:48.256 "config": [ 00:09:48.256 { 00:09:48.256 "params": { 00:09:48.256 "trtype": "pcie", 00:09:48.256 "traddr": "0000:00:10.0", 00:09:48.256 "name": "Nvme0" 00:09:48.256 }, 00:09:48.256 "method": "bdev_nvme_attach_controller" 00:09:48.256 }, 00:09:48.256 { 00:09:48.256 "method": "bdev_wait_for_examine" 00:09:48.256 } 00:09:48.256 ] 00:09:48.256 } 00:09:48.256 ] 00:09:48.256 } 00:09:48.256 [2024-07-25 08:51:55.289097] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.514 [2024-07-25 08:51:55.532392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.773 [2024-07-25 08:51:55.740409] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:50.407  Copying: 1024/1024 [kB] (average 500 MBps) 00:09:50.407 00:09:50.407 08:51:57 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:50.407 ************************************ 00:09:50.407 END TEST spdk_dd_basic_rw 00:09:50.407 ************************************ 00:09:50.407 00:09:50.407 real 0m49.129s 00:09:50.407 user 0m40.906s 00:09:50.407 sys 0m20.570s 00:09:50.407 08:51:57 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:50.407 08:51:57 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:09:50.407 08:51:57 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:09:50.407 08:51:57 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:50.407 08:51:57 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:50.407 08:51:57 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:50.407 ************************************ 00:09:50.407 START TEST spdk_dd_posix 00:09:50.407 ************************************ 00:09:50.407 08:51:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:09:50.407 * Looking for test storage... 
00:09:50.407 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:50.407 08:51:57 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:50.407 08:51:57 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.407 08:51:57 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.407 08:51:57 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.407 08:51:57 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.407 08:51:57 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.407 08:51:57 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.407 08:51:57 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:09:50.407 08:51:57 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.407 08:51:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:09:50.407 08:51:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:09:50.407 08:51:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:09:50.407 08:51:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:09:50.407 08:51:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:50.407 08:51:57 spdk_dd.spdk_dd_posix 
-- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:50.407 08:51:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:09:50.407 08:51:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:09:50.407 * First test run, liburing in use 00:09:50.407 08:51:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:09:50.407 08:51:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:50.407 08:51:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:50.407 08:51:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:50.407 ************************************ 00:09:50.407 START TEST dd_flag_append 00:09:50.407 ************************************ 00:09:50.407 08:51:57 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1125 -- # append 00:09:50.407 08:51:57 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:09:50.407 08:51:57 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:09:50.407 08:51:57 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:09:50.407 08:51:57 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:09:50.407 08:51:57 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:09:50.407 08:51:57 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=hsn95alectwmltau1tpqwevfgkr67r4r 00:09:50.407 08:51:57 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:09:50.407 08:51:57 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:09:50.407 08:51:57 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:09:50.407 08:51:57 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=10bhcy5snex8rgxyzueghy7hn8ksjsbp 00:09:50.407 08:51:57 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s hsn95alectwmltau1tpqwevfgkr67r4r 00:09:50.407 08:51:57 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s 10bhcy5snex8rgxyzueghy7hn8ksjsbp 00:09:50.407 08:51:57 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:09:50.408 [2024-07-25 08:51:57.376006] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:50.408 [2024-07-25 08:51:57.376164] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63879 ] 00:09:50.665 [2024-07-25 08:51:57.544528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.924 [2024-07-25 08:51:57.815622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.924 [2024-07-25 08:51:58.018930] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:52.559  Copying: 32/32 [B] (average 31 kBps) 00:09:52.559 00:09:52.559 ************************************ 00:09:52.559 END TEST dd_flag_append 00:09:52.559 ************************************ 00:09:52.559 08:51:59 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ 10bhcy5snex8rgxyzueghy7hn8ksjsbphsn95alectwmltau1tpqwevfgkr67r4r == \1\0\b\h\c\y\5\s\n\e\x\8\r\g\x\y\z\u\e\g\h\y\7\h\n\8\k\s\j\s\b\p\h\s\n\9\5\a\l\e\c\t\w\m\l\t\a\u\1\t\p\q\w\e\v\f\g\k\r\6\7\r\4\r ]] 00:09:52.559 00:09:52.559 real 0m2.031s 00:09:52.559 user 0m1.670s 00:09:52.559 sys 0m0.978s 00:09:52.559 08:51:59 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:52.559 08:51:59 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:09:52.559 08:51:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:09:52.559 08:51:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:52.559 08:51:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:52.559 08:51:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:52.559 ************************************ 00:09:52.559 START TEST dd_flag_directory 00:09:52.559 ************************************ 00:09:52.559 08:51:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1125 -- # directory 00:09:52.559 08:51:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:52.559 08:51:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:09:52.559 08:51:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:52.559 08:51:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:52.559 08:51:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:52.559 08:51:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:52.559 08:51:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:52.559 08:51:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:52.559 08:51:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 
-- # case "$(type -t "$arg")" in 00:09:52.559 08:51:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:52.559 08:51:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:52.559 08:51:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:52.559 [2024-07-25 08:51:59.488831] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:52.559 [2024-07-25 08:51:59.489061] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63925 ] 00:09:52.559 [2024-07-25 08:51:59.669730] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.818 [2024-07-25 08:51:59.906860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.076 [2024-07-25 08:52:00.107238] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:53.334 [2024-07-25 08:52:00.214555] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:53.334 [2024-07-25 08:52:00.214628] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:53.334 [2024-07-25 08:52:00.214656] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:53.898 [2024-07-25 08:52:00.946237] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:54.464 08:52:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:09:54.464 08:52:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:54.464 08:52:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:09:54.464 08:52:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:09:54.464 08:52:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:09:54.464 08:52:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:54.464 08:52:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:09:54.464 08:52:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:09:54.464 08:52:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:09:54.464 08:52:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:54.464 08:52:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:54.464 08:52:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:54.464 08:52:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:54.464 08:52:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:54.464 08:52:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:54.464 08:52:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:54.464 08:52:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:54.464 08:52:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:09:54.464 [2024-07-25 08:52:01.534636] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:54.464 [2024-07-25 08:52:01.534866] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63952 ] 00:09:54.721 [2024-07-25 08:52:01.712695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.978 [2024-07-25 08:52:01.994473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.236 [2024-07-25 08:52:02.198723] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:55.236 [2024-07-25 08:52:02.308263] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:55.236 [2024-07-25 08:52:02.308336] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:55.236 [2024-07-25 08:52:02.308366] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:56.171 [2024-07-25 08:52:03.074789] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:56.428 08:52:03 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:09:56.428 08:52:03 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:56.428 08:52:03 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:09:56.428 08:52:03 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:09:56.428 08:52:03 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:09:56.428 08:52:03 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:56.428 00:09:56.428 real 0m4.152s 00:09:56.428 user 0m3.378s 00:09:56.428 sys 0m0.543s 00:09:56.428 08:52:03 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:56.428 08:52:03 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:09:56.428 ************************************ 00:09:56.428 END TEST dd_flag_directory 00:09:56.428 ************************************ 00:09:56.684 08:52:03 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test 
dd_flag_nofollow nofollow 00:09:56.684 08:52:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:56.684 08:52:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:56.684 08:52:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:56.684 ************************************ 00:09:56.684 START TEST dd_flag_nofollow 00:09:56.684 ************************************ 00:09:56.684 08:52:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1125 -- # nofollow 00:09:56.684 08:52:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:09:56.684 08:52:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:09:56.684 08:52:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:09:56.684 08:52:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:09:56.684 08:52:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:56.684 08:52:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:09:56.685 08:52:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:56.685 08:52:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:56.685 08:52:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:56.685 08:52:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:56.685 08:52:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:56.685 08:52:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:56.685 08:52:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:56.685 08:52:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:56.685 08:52:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:56.685 08:52:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:56.685 [2024-07-25 08:52:03.690335] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:56.685 [2024-07-25 08:52:03.690497] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63998 ] 00:09:56.940 [2024-07-25 08:52:03.863283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.197 [2024-07-25 08:52:04.136978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.454 [2024-07-25 08:52:04.339882] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:57.454 [2024-07-25 08:52:04.447717] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:09:57.454 [2024-07-25 08:52:04.447790] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:09:57.455 [2024-07-25 08:52:04.447846] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:58.385 [2024-07-25 08:52:05.183268] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:58.643 08:52:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:09:58.643 08:52:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:58.643 08:52:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:09:58.643 08:52:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:09:58.643 08:52:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:09:58.643 08:52:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:58.643 08:52:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:09:58.643 08:52:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:09:58.643 08:52:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:09:58.643 08:52:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:58.643 08:52:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:58.643 08:52:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:58.643 08:52:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:58.643 08:52:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:58.643 08:52:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:58.643 08:52:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:58.643 08:52:05 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:58.643 08:52:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:09:58.643 [2024-07-25 08:52:05.726461] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:58.643 [2024-07-25 08:52:05.726656] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64025 ] 00:09:58.901 [2024-07-25 08:52:05.897119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.158 [2024-07-25 08:52:06.123513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.416 [2024-07-25 08:52:06.327772] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:59.416 [2024-07-25 08:52:06.434550] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:09:59.416 [2024-07-25 08:52:06.434623] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:09:59.416 [2024-07-25 08:52:06.434671] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:00.349 [2024-07-25 08:52:07.164866] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:00.606 08:52:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:10:00.606 08:52:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:00.606 08:52:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:10:00.606 08:52:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:10:00.606 08:52:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:10:00.606 08:52:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:00.606 08:52:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:10:00.606 08:52:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:10:00.606 08:52:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:10:00.606 08:52:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:00.606 [2024-07-25 08:52:07.705698] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:00.606 [2024-07-25 08:52:07.706197] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64050 ] 00:10:00.879 [2024-07-25 08:52:07.881540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.136 [2024-07-25 08:52:08.127472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.394 [2024-07-25 08:52:08.369290] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:02.770  Copying: 512/512 [B] (average 500 kBps) 00:10:02.770 00:10:02.770 08:52:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ 4v0qk416g6mjncs2e2mci3638x0ea8rhxcsoeqj7cu83pcw54ojwmtdpiaywi5mdy8f41se3wjy3q9cfv1gmhax69oa88r0z6m7v26zeo4z8dqae03wa93zks4y7lxl9y6a12ia46cnqk59lj82lctewlxknrquynrte39f311x1ymgzkaq9mkq4a2is58rmcwl1z8p8tiw14r42jp7urfyztju6tvqf4edaf9fk4qlj6o3dvj6plkyflr1yadhlyi3a5syp5cljm38h3yudhwd0feup62nnqolkmxvf17w6jofak27t328l7lnc2s0p3q1cdpdwxw8uxb5rznbg0eho3h6xm5iffi01ae0tz791ori6yssgmh6krohxztecwmh6h72qdpwkas965doi24ej2o74nkw35fevhgglasuj8z37n473edn5agxdj2xue020u9340zicjdw1pmujmqpzfqbyni4jmovgrfoj8pgsg3zab8vbtirofgecgqk9 == \4\v\0\q\k\4\1\6\g\6\m\j\n\c\s\2\e\2\m\c\i\3\6\3\8\x\0\e\a\8\r\h\x\c\s\o\e\q\j\7\c\u\8\3\p\c\w\5\4\o\j\w\m\t\d\p\i\a\y\w\i\5\m\d\y\8\f\4\1\s\e\3\w\j\y\3\q\9\c\f\v\1\g\m\h\a\x\6\9\o\a\8\8\r\0\z\6\m\7\v\2\6\z\e\o\4\z\8\d\q\a\e\0\3\w\a\9\3\z\k\s\4\y\7\l\x\l\9\y\6\a\1\2\i\a\4\6\c\n\q\k\5\9\l\j\8\2\l\c\t\e\w\l\x\k\n\r\q\u\y\n\r\t\e\3\9\f\3\1\1\x\1\y\m\g\z\k\a\q\9\m\k\q\4\a\2\i\s\5\8\r\m\c\w\l\1\z\8\p\8\t\i\w\1\4\r\4\2\j\p\7\u\r\f\y\z\t\j\u\6\t\v\q\f\4\e\d\a\f\9\f\k\4\q\l\j\6\o\3\d\v\j\6\p\l\k\y\f\l\r\1\y\a\d\h\l\y\i\3\a\5\s\y\p\5\c\l\j\m\3\8\h\3\y\u\d\h\w\d\0\f\e\u\p\6\2\n\n\q\o\l\k\m\x\v\f\1\7\w\6\j\o\f\a\k\2\7\t\3\2\8\l\7\l\n\c\2\s\0\p\3\q\1\c\d\p\d\w\x\w\8\u\x\b\5\r\z\n\b\g\0\e\h\o\3\h\6\x\m\5\i\f\f\i\0\1\a\e\0\t\z\7\9\1\o\r\i\6\y\s\s\g\m\h\6\k\r\o\h\x\z\t\e\c\w\m\h\6\h\7\2\q\d\p\w\k\a\s\9\6\5\d\o\i\2\4\e\j\2\o\7\4\n\k\w\3\5\f\e\v\h\g\g\l\a\s\u\j\8\z\3\7\n\4\7\3\e\d\n\5\a\g\x\d\j\2\x\u\e\0\2\0\u\9\3\4\0\z\i\c\j\d\w\1\p\m\u\j\m\q\p\z\f\q\b\y\n\i\4\j\m\o\v\g\r\f\o\j\8\p\g\s\g\3\z\a\b\8\v\b\t\i\r\o\f\g\e\c\g\q\k\9 ]] 00:10:02.770 ************************************ 00:10:02.770 END TEST dd_flag_nofollow 00:10:02.770 ************************************ 00:10:02.770 00:10:02.770 real 0m6.133s 00:10:02.770 user 0m4.986s 00:10:02.770 sys 0m1.586s 00:10:02.770 08:52:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:02.770 08:52:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:10:02.770 08:52:09 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:10:02.770 08:52:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:02.770 08:52:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:02.770 08:52:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:02.770 ************************************ 00:10:02.770 START TEST dd_flag_noatime 00:10:02.770 ************************************ 00:10:02.770 08:52:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1125 -- # noatime 00:10:02.770 08:52:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:10:02.770 08:52:09 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:10:02.770 08:52:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:10:02.770 08:52:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:10:02.770 08:52:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:10:02.770 08:52:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:02.770 08:52:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1721897528 00:10:02.770 08:52:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:02.770 08:52:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1721897529 00:10:02.770 08:52:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:10:03.704 08:52:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:03.962 [2024-07-25 08:52:10.883845] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:03.962 [2024-07-25 08:52:10.884296] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64110 ] 00:10:03.962 [2024-07-25 08:52:11.060128] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.219 [2024-07-25 08:52:11.323521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.786 [2024-07-25 08:52:11.604648] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:06.198  Copying: 512/512 [B] (average 500 kBps) 00:10:06.198 00:10:06.198 08:52:12 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:06.198 08:52:12 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1721897528 )) 00:10:06.198 08:52:12 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:06.198 08:52:12 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1721897529 )) 00:10:06.198 08:52:12 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:06.198 [2024-07-25 08:52:13.016198] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:06.198 [2024-07-25 08:52:13.016385] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64147 ] 00:10:06.198 [2024-07-25 08:52:13.181518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.481 [2024-07-25 08:52:13.426330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.739 [2024-07-25 08:52:13.635699] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:08.117  Copying: 512/512 [B] (average 500 kBps) 00:10:08.117 00:10:08.117 08:52:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:08.117 ************************************ 00:10:08.117 END TEST dd_flag_noatime 00:10:08.117 ************************************ 00:10:08.117 08:52:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1721897533 )) 00:10:08.117 00:10:08.117 real 0m5.233s 00:10:08.117 user 0m3.428s 00:10:08.117 sys 0m2.053s 00:10:08.117 08:52:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:08.117 08:52:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:10:08.117 08:52:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:10:08.117 08:52:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:08.117 08:52:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:08.117 08:52:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:08.117 ************************************ 00:10:08.117 START TEST dd_flags_misc 00:10:08.117 ************************************ 00:10:08.117 08:52:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1125 -- # io 00:10:08.117 08:52:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:10:08.117 08:52:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:10:08.117 08:52:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:10:08.117 08:52:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:10:08.117 08:52:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:10:08.117 08:52:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:10:08.117 08:52:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:10:08.117 08:52:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:08.117 08:52:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:10:08.117 [2024-07-25 08:52:15.151349] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:08.117 [2024-07-25 08:52:15.152032] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64193 ] 00:10:08.375 [2024-07-25 08:52:15.328933] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.633 [2024-07-25 08:52:15.575514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.892 [2024-07-25 08:52:15.783126] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:10.266  Copying: 512/512 [B] (average 500 kBps) 00:10:10.266 00:10:10.266 08:52:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ iga5z39pung1r7zmnywfgdfdrwecsbfbdu8pmhvhsj1hpc2qvlj030eavt8vzddbxcw0w5s654jhn5285van0xje66mq607o2vv2ton5614lz66cf4erg5aruzs65xve5co3jj8tby8t41pk5tcbduke3tmuflw8vsg9rxslkxi5qlpqizd6fsy1qiyiahoe4puljdkpo4dfv60pm9mis2ueixgqeo5cprjdflbdzvjcpsdq3rmmigoo6by7otaarbwyg2kfd59bbdd8xhgzh7mcakkvss77lercu9axbcvq4h588ancdjriv2mxlbs6ve2s6nr6yhajd02y99xi57supyykt7a7e82vs92y8q1qomd9wjo8tuo3vq67oamfi9zjd5ez2u0ncysy9n6eo32shk39xoynalub01i5x2peupzd14wx8wtwxhxd2eojoc65vzcrlmjke4kwgczhku1vsl3tm6t5zkl5hgs8uv4vx0yxzoiusiauubc472jy == \i\g\a\5\z\3\9\p\u\n\g\1\r\7\z\m\n\y\w\f\g\d\f\d\r\w\e\c\s\b\f\b\d\u\8\p\m\h\v\h\s\j\1\h\p\c\2\q\v\l\j\0\3\0\e\a\v\t\8\v\z\d\d\b\x\c\w\0\w\5\s\6\5\4\j\h\n\5\2\8\5\v\a\n\0\x\j\e\6\6\m\q\6\0\7\o\2\v\v\2\t\o\n\5\6\1\4\l\z\6\6\c\f\4\e\r\g\5\a\r\u\z\s\6\5\x\v\e\5\c\o\3\j\j\8\t\b\y\8\t\4\1\p\k\5\t\c\b\d\u\k\e\3\t\m\u\f\l\w\8\v\s\g\9\r\x\s\l\k\x\i\5\q\l\p\q\i\z\d\6\f\s\y\1\q\i\y\i\a\h\o\e\4\p\u\l\j\d\k\p\o\4\d\f\v\6\0\p\m\9\m\i\s\2\u\e\i\x\g\q\e\o\5\c\p\r\j\d\f\l\b\d\z\v\j\c\p\s\d\q\3\r\m\m\i\g\o\o\6\b\y\7\o\t\a\a\r\b\w\y\g\2\k\f\d\5\9\b\b\d\d\8\x\h\g\z\h\7\m\c\a\k\k\v\s\s\7\7\l\e\r\c\u\9\a\x\b\c\v\q\4\h\5\8\8\a\n\c\d\j\r\i\v\2\m\x\l\b\s\6\v\e\2\s\6\n\r\6\y\h\a\j\d\0\2\y\9\9\x\i\5\7\s\u\p\y\y\k\t\7\a\7\e\8\2\v\s\9\2\y\8\q\1\q\o\m\d\9\w\j\o\8\t\u\o\3\v\q\6\7\o\a\m\f\i\9\z\j\d\5\e\z\2\u\0\n\c\y\s\y\9\n\6\e\o\3\2\s\h\k\3\9\x\o\y\n\a\l\u\b\0\1\i\5\x\2\p\e\u\p\z\d\1\4\w\x\8\w\t\w\x\h\x\d\2\e\o\j\o\c\6\5\v\z\c\r\l\m\j\k\e\4\k\w\g\c\z\h\k\u\1\v\s\l\3\t\m\6\t\5\z\k\l\5\h\g\s\8\u\v\4\v\x\0\y\x\z\o\i\u\s\i\a\u\u\b\c\4\7\2\j\y ]] 00:10:10.266 08:52:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:10.266 08:52:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:10:10.266 [2024-07-25 08:52:17.186116] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:10.266 [2024-07-25 08:52:17.186351] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64220 ] 00:10:10.266 [2024-07-25 08:52:17.366046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.524 [2024-07-25 08:52:17.600192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.782 [2024-07-25 08:52:17.798290] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:12.413  Copying: 512/512 [B] (average 500 kBps) 00:10:12.413 00:10:12.413 08:52:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ iga5z39pung1r7zmnywfgdfdrwecsbfbdu8pmhvhsj1hpc2qvlj030eavt8vzddbxcw0w5s654jhn5285van0xje66mq607o2vv2ton5614lz66cf4erg5aruzs65xve5co3jj8tby8t41pk5tcbduke3tmuflw8vsg9rxslkxi5qlpqizd6fsy1qiyiahoe4puljdkpo4dfv60pm9mis2ueixgqeo5cprjdflbdzvjcpsdq3rmmigoo6by7otaarbwyg2kfd59bbdd8xhgzh7mcakkvss77lercu9axbcvq4h588ancdjriv2mxlbs6ve2s6nr6yhajd02y99xi57supyykt7a7e82vs92y8q1qomd9wjo8tuo3vq67oamfi9zjd5ez2u0ncysy9n6eo32shk39xoynalub01i5x2peupzd14wx8wtwxhxd2eojoc65vzcrlmjke4kwgczhku1vsl3tm6t5zkl5hgs8uv4vx0yxzoiusiauubc472jy == \i\g\a\5\z\3\9\p\u\n\g\1\r\7\z\m\n\y\w\f\g\d\f\d\r\w\e\c\s\b\f\b\d\u\8\p\m\h\v\h\s\j\1\h\p\c\2\q\v\l\j\0\3\0\e\a\v\t\8\v\z\d\d\b\x\c\w\0\w\5\s\6\5\4\j\h\n\5\2\8\5\v\a\n\0\x\j\e\6\6\m\q\6\0\7\o\2\v\v\2\t\o\n\5\6\1\4\l\z\6\6\c\f\4\e\r\g\5\a\r\u\z\s\6\5\x\v\e\5\c\o\3\j\j\8\t\b\y\8\t\4\1\p\k\5\t\c\b\d\u\k\e\3\t\m\u\f\l\w\8\v\s\g\9\r\x\s\l\k\x\i\5\q\l\p\q\i\z\d\6\f\s\y\1\q\i\y\i\a\h\o\e\4\p\u\l\j\d\k\p\o\4\d\f\v\6\0\p\m\9\m\i\s\2\u\e\i\x\g\q\e\o\5\c\p\r\j\d\f\l\b\d\z\v\j\c\p\s\d\q\3\r\m\m\i\g\o\o\6\b\y\7\o\t\a\a\r\b\w\y\g\2\k\f\d\5\9\b\b\d\d\8\x\h\g\z\h\7\m\c\a\k\k\v\s\s\7\7\l\e\r\c\u\9\a\x\b\c\v\q\4\h\5\8\8\a\n\c\d\j\r\i\v\2\m\x\l\b\s\6\v\e\2\s\6\n\r\6\y\h\a\j\d\0\2\y\9\9\x\i\5\7\s\u\p\y\y\k\t\7\a\7\e\8\2\v\s\9\2\y\8\q\1\q\o\m\d\9\w\j\o\8\t\u\o\3\v\q\6\7\o\a\m\f\i\9\z\j\d\5\e\z\2\u\0\n\c\y\s\y\9\n\6\e\o\3\2\s\h\k\3\9\x\o\y\n\a\l\u\b\0\1\i\5\x\2\p\e\u\p\z\d\1\4\w\x\8\w\t\w\x\h\x\d\2\e\o\j\o\c\6\5\v\z\c\r\l\m\j\k\e\4\k\w\g\c\z\h\k\u\1\v\s\l\3\t\m\6\t\5\z\k\l\5\h\g\s\8\u\v\4\v\x\0\y\x\z\o\i\u\s\i\a\u\u\b\c\4\7\2\j\y ]] 00:10:12.413 08:52:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:12.413 08:52:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:10:12.413 [2024-07-25 08:52:19.218042] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:12.413 [2024-07-25 08:52:19.218300] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64247 ] 00:10:12.413 [2024-07-25 08:52:19.406154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.672 [2024-07-25 08:52:19.648641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.930 [2024-07-25 08:52:19.855821] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:14.302  Copying: 512/512 [B] (average 166 kBps) 00:10:14.302 00:10:14.302 08:52:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ iga5z39pung1r7zmnywfgdfdrwecsbfbdu8pmhvhsj1hpc2qvlj030eavt8vzddbxcw0w5s654jhn5285van0xje66mq607o2vv2ton5614lz66cf4erg5aruzs65xve5co3jj8tby8t41pk5tcbduke3tmuflw8vsg9rxslkxi5qlpqizd6fsy1qiyiahoe4puljdkpo4dfv60pm9mis2ueixgqeo5cprjdflbdzvjcpsdq3rmmigoo6by7otaarbwyg2kfd59bbdd8xhgzh7mcakkvss77lercu9axbcvq4h588ancdjriv2mxlbs6ve2s6nr6yhajd02y99xi57supyykt7a7e82vs92y8q1qomd9wjo8tuo3vq67oamfi9zjd5ez2u0ncysy9n6eo32shk39xoynalub01i5x2peupzd14wx8wtwxhxd2eojoc65vzcrlmjke4kwgczhku1vsl3tm6t5zkl5hgs8uv4vx0yxzoiusiauubc472jy == \i\g\a\5\z\3\9\p\u\n\g\1\r\7\z\m\n\y\w\f\g\d\f\d\r\w\e\c\s\b\f\b\d\u\8\p\m\h\v\h\s\j\1\h\p\c\2\q\v\l\j\0\3\0\e\a\v\t\8\v\z\d\d\b\x\c\w\0\w\5\s\6\5\4\j\h\n\5\2\8\5\v\a\n\0\x\j\e\6\6\m\q\6\0\7\o\2\v\v\2\t\o\n\5\6\1\4\l\z\6\6\c\f\4\e\r\g\5\a\r\u\z\s\6\5\x\v\e\5\c\o\3\j\j\8\t\b\y\8\t\4\1\p\k\5\t\c\b\d\u\k\e\3\t\m\u\f\l\w\8\v\s\g\9\r\x\s\l\k\x\i\5\q\l\p\q\i\z\d\6\f\s\y\1\q\i\y\i\a\h\o\e\4\p\u\l\j\d\k\p\o\4\d\f\v\6\0\p\m\9\m\i\s\2\u\e\i\x\g\q\e\o\5\c\p\r\j\d\f\l\b\d\z\v\j\c\p\s\d\q\3\r\m\m\i\g\o\o\6\b\y\7\o\t\a\a\r\b\w\y\g\2\k\f\d\5\9\b\b\d\d\8\x\h\g\z\h\7\m\c\a\k\k\v\s\s\7\7\l\e\r\c\u\9\a\x\b\c\v\q\4\h\5\8\8\a\n\c\d\j\r\i\v\2\m\x\l\b\s\6\v\e\2\s\6\n\r\6\y\h\a\j\d\0\2\y\9\9\x\i\5\7\s\u\p\y\y\k\t\7\a\7\e\8\2\v\s\9\2\y\8\q\1\q\o\m\d\9\w\j\o\8\t\u\o\3\v\q\6\7\o\a\m\f\i\9\z\j\d\5\e\z\2\u\0\n\c\y\s\y\9\n\6\e\o\3\2\s\h\k\3\9\x\o\y\n\a\l\u\b\0\1\i\5\x\2\p\e\u\p\z\d\1\4\w\x\8\w\t\w\x\h\x\d\2\e\o\j\o\c\6\5\v\z\c\r\l\m\j\k\e\4\k\w\g\c\z\h\k\u\1\v\s\l\3\t\m\6\t\5\z\k\l\5\h\g\s\8\u\v\4\v\x\0\y\x\z\o\i\u\s\i\a\u\u\b\c\4\7\2\j\y ]] 00:10:14.302 08:52:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:14.302 08:52:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:10:14.303 [2024-07-25 08:52:21.235065] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:14.303 [2024-07-25 08:52:21.235241] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64274 ] 00:10:14.303 [2024-07-25 08:52:21.408898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.560 [2024-07-25 08:52:21.647871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.831 [2024-07-25 08:52:21.851134] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:16.461  Copying: 512/512 [B] (average 166 kBps) 00:10:16.461 00:10:16.461 08:52:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ iga5z39pung1r7zmnywfgdfdrwecsbfbdu8pmhvhsj1hpc2qvlj030eavt8vzddbxcw0w5s654jhn5285van0xje66mq607o2vv2ton5614lz66cf4erg5aruzs65xve5co3jj8tby8t41pk5tcbduke3tmuflw8vsg9rxslkxi5qlpqizd6fsy1qiyiahoe4puljdkpo4dfv60pm9mis2ueixgqeo5cprjdflbdzvjcpsdq3rmmigoo6by7otaarbwyg2kfd59bbdd8xhgzh7mcakkvss77lercu9axbcvq4h588ancdjriv2mxlbs6ve2s6nr6yhajd02y99xi57supyykt7a7e82vs92y8q1qomd9wjo8tuo3vq67oamfi9zjd5ez2u0ncysy9n6eo32shk39xoynalub01i5x2peupzd14wx8wtwxhxd2eojoc65vzcrlmjke4kwgczhku1vsl3tm6t5zkl5hgs8uv4vx0yxzoiusiauubc472jy == \i\g\a\5\z\3\9\p\u\n\g\1\r\7\z\m\n\y\w\f\g\d\f\d\r\w\e\c\s\b\f\b\d\u\8\p\m\h\v\h\s\j\1\h\p\c\2\q\v\l\j\0\3\0\e\a\v\t\8\v\z\d\d\b\x\c\w\0\w\5\s\6\5\4\j\h\n\5\2\8\5\v\a\n\0\x\j\e\6\6\m\q\6\0\7\o\2\v\v\2\t\o\n\5\6\1\4\l\z\6\6\c\f\4\e\r\g\5\a\r\u\z\s\6\5\x\v\e\5\c\o\3\j\j\8\t\b\y\8\t\4\1\p\k\5\t\c\b\d\u\k\e\3\t\m\u\f\l\w\8\v\s\g\9\r\x\s\l\k\x\i\5\q\l\p\q\i\z\d\6\f\s\y\1\q\i\y\i\a\h\o\e\4\p\u\l\j\d\k\p\o\4\d\f\v\6\0\p\m\9\m\i\s\2\u\e\i\x\g\q\e\o\5\c\p\r\j\d\f\l\b\d\z\v\j\c\p\s\d\q\3\r\m\m\i\g\o\o\6\b\y\7\o\t\a\a\r\b\w\y\g\2\k\f\d\5\9\b\b\d\d\8\x\h\g\z\h\7\m\c\a\k\k\v\s\s\7\7\l\e\r\c\u\9\a\x\b\c\v\q\4\h\5\8\8\a\n\c\d\j\r\i\v\2\m\x\l\b\s\6\v\e\2\s\6\n\r\6\y\h\a\j\d\0\2\y\9\9\x\i\5\7\s\u\p\y\y\k\t\7\a\7\e\8\2\v\s\9\2\y\8\q\1\q\o\m\d\9\w\j\o\8\t\u\o\3\v\q\6\7\o\a\m\f\i\9\z\j\d\5\e\z\2\u\0\n\c\y\s\y\9\n\6\e\o\3\2\s\h\k\3\9\x\o\y\n\a\l\u\b\0\1\i\5\x\2\p\e\u\p\z\d\1\4\w\x\8\w\t\w\x\h\x\d\2\e\o\j\o\c\6\5\v\z\c\r\l\m\j\k\e\4\k\w\g\c\z\h\k\u\1\v\s\l\3\t\m\6\t\5\z\k\l\5\h\g\s\8\u\v\4\v\x\0\y\x\z\o\i\u\s\i\a\u\u\b\c\4\7\2\j\y ]] 00:10:16.461 08:52:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:10:16.461 08:52:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:10:16.461 08:52:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:10:16.461 08:52:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:10:16.461 08:52:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:16.461 08:52:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:10:16.461 [2024-07-25 08:52:23.287230] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:16.461 [2024-07-25 08:52:23.287490] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64296 ] 00:10:16.461 [2024-07-25 08:52:23.469776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.718 [2024-07-25 08:52:23.734319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.977 [2024-07-25 08:52:23.940276] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:18.357  Copying: 512/512 [B] (average 500 kBps) 00:10:18.357 00:10:18.357 08:52:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 3q03il5qqln5z39italsl8eid1vtgmiv2ze7ngjt4zemg8a6b00rjan07r3e2w007e0kgf9czygaet4mnvck2xxjufufkggc4rvzhysbmjdihovxtyzisg8ojo9fnsivnp3fyktypeto3un0g6rb26lt0xw0wtulxgwmg0gvlqpr3isg7kqpqpq1tbap8d0pnq31biupzydyn0idtvuo9woyhv6ukfg0c8waxiovmyy2a5uujc6zkanwhy4qkekmg8omqyn0zhpm8clhuhlbz7ot5i1g3l2b98d6nnxj86bgk5e66ijbjy0q3zbxj74gswrc7umsgjei4zx5jyxw5dzhtj4402ejjccevf8kx2clblcu6b1vux3c3m4zgpy057lxanubxbqqgfpoigwrrgf4a452o54tvvudhh92lndv7yaaczyany81l3prcwwh4gsdgb3pfdgbtuwdaurg4par4mny8f0fulxoh6i50h8el4b5ce5qa86x2s0rmce4 == \3\q\0\3\i\l\5\q\q\l\n\5\z\3\9\i\t\a\l\s\l\8\e\i\d\1\v\t\g\m\i\v\2\z\e\7\n\g\j\t\4\z\e\m\g\8\a\6\b\0\0\r\j\a\n\0\7\r\3\e\2\w\0\0\7\e\0\k\g\f\9\c\z\y\g\a\e\t\4\m\n\v\c\k\2\x\x\j\u\f\u\f\k\g\g\c\4\r\v\z\h\y\s\b\m\j\d\i\h\o\v\x\t\y\z\i\s\g\8\o\j\o\9\f\n\s\i\v\n\p\3\f\y\k\t\y\p\e\t\o\3\u\n\0\g\6\r\b\2\6\l\t\0\x\w\0\w\t\u\l\x\g\w\m\g\0\g\v\l\q\p\r\3\i\s\g\7\k\q\p\q\p\q\1\t\b\a\p\8\d\0\p\n\q\3\1\b\i\u\p\z\y\d\y\n\0\i\d\t\v\u\o\9\w\o\y\h\v\6\u\k\f\g\0\c\8\w\a\x\i\o\v\m\y\y\2\a\5\u\u\j\c\6\z\k\a\n\w\h\y\4\q\k\e\k\m\g\8\o\m\q\y\n\0\z\h\p\m\8\c\l\h\u\h\l\b\z\7\o\t\5\i\1\g\3\l\2\b\9\8\d\6\n\n\x\j\8\6\b\g\k\5\e\6\6\i\j\b\j\y\0\q\3\z\b\x\j\7\4\g\s\w\r\c\7\u\m\s\g\j\e\i\4\z\x\5\j\y\x\w\5\d\z\h\t\j\4\4\0\2\e\j\j\c\c\e\v\f\8\k\x\2\c\l\b\l\c\u\6\b\1\v\u\x\3\c\3\m\4\z\g\p\y\0\5\7\l\x\a\n\u\b\x\b\q\q\g\f\p\o\i\g\w\r\r\g\f\4\a\4\5\2\o\5\4\t\v\v\u\d\h\h\9\2\l\n\d\v\7\y\a\a\c\z\y\a\n\y\8\1\l\3\p\r\c\w\w\h\4\g\s\d\g\b\3\p\f\d\g\b\t\u\w\d\a\u\r\g\4\p\a\r\4\m\n\y\8\f\0\f\u\l\x\o\h\6\i\5\0\h\8\e\l\4\b\5\c\e\5\q\a\8\6\x\2\s\0\r\m\c\e\4 ]] 00:10:18.357 08:52:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:18.357 08:52:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:10:18.357 [2024-07-25 08:52:25.376153] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:18.357 [2024-07-25 08:52:25.376376] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64323 ] 00:10:18.650 [2024-07-25 08:52:25.553724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.908 [2024-07-25 08:52:25.817994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.167 [2024-07-25 08:52:26.024171] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:20.540  Copying: 512/512 [B] (average 500 kBps) 00:10:20.540 00:10:20.540 08:52:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 3q03il5qqln5z39italsl8eid1vtgmiv2ze7ngjt4zemg8a6b00rjan07r3e2w007e0kgf9czygaet4mnvck2xxjufufkggc4rvzhysbmjdihovxtyzisg8ojo9fnsivnp3fyktypeto3un0g6rb26lt0xw0wtulxgwmg0gvlqpr3isg7kqpqpq1tbap8d0pnq31biupzydyn0idtvuo9woyhv6ukfg0c8waxiovmyy2a5uujc6zkanwhy4qkekmg8omqyn0zhpm8clhuhlbz7ot5i1g3l2b98d6nnxj86bgk5e66ijbjy0q3zbxj74gswrc7umsgjei4zx5jyxw5dzhtj4402ejjccevf8kx2clblcu6b1vux3c3m4zgpy057lxanubxbqqgfpoigwrrgf4a452o54tvvudhh92lndv7yaaczyany81l3prcwwh4gsdgb3pfdgbtuwdaurg4par4mny8f0fulxoh6i50h8el4b5ce5qa86x2s0rmce4 == \3\q\0\3\i\l\5\q\q\l\n\5\z\3\9\i\t\a\l\s\l\8\e\i\d\1\v\t\g\m\i\v\2\z\e\7\n\g\j\t\4\z\e\m\g\8\a\6\b\0\0\r\j\a\n\0\7\r\3\e\2\w\0\0\7\e\0\k\g\f\9\c\z\y\g\a\e\t\4\m\n\v\c\k\2\x\x\j\u\f\u\f\k\g\g\c\4\r\v\z\h\y\s\b\m\j\d\i\h\o\v\x\t\y\z\i\s\g\8\o\j\o\9\f\n\s\i\v\n\p\3\f\y\k\t\y\p\e\t\o\3\u\n\0\g\6\r\b\2\6\l\t\0\x\w\0\w\t\u\l\x\g\w\m\g\0\g\v\l\q\p\r\3\i\s\g\7\k\q\p\q\p\q\1\t\b\a\p\8\d\0\p\n\q\3\1\b\i\u\p\z\y\d\y\n\0\i\d\t\v\u\o\9\w\o\y\h\v\6\u\k\f\g\0\c\8\w\a\x\i\o\v\m\y\y\2\a\5\u\u\j\c\6\z\k\a\n\w\h\y\4\q\k\e\k\m\g\8\o\m\q\y\n\0\z\h\p\m\8\c\l\h\u\h\l\b\z\7\o\t\5\i\1\g\3\l\2\b\9\8\d\6\n\n\x\j\8\6\b\g\k\5\e\6\6\i\j\b\j\y\0\q\3\z\b\x\j\7\4\g\s\w\r\c\7\u\m\s\g\j\e\i\4\z\x\5\j\y\x\w\5\d\z\h\t\j\4\4\0\2\e\j\j\c\c\e\v\f\8\k\x\2\c\l\b\l\c\u\6\b\1\v\u\x\3\c\3\m\4\z\g\p\y\0\5\7\l\x\a\n\u\b\x\b\q\q\g\f\p\o\i\g\w\r\r\g\f\4\a\4\5\2\o\5\4\t\v\v\u\d\h\h\9\2\l\n\d\v\7\y\a\a\c\z\y\a\n\y\8\1\l\3\p\r\c\w\w\h\4\g\s\d\g\b\3\p\f\d\g\b\t\u\w\d\a\u\r\g\4\p\a\r\4\m\n\y\8\f\0\f\u\l\x\o\h\6\i\5\0\h\8\e\l\4\b\5\c\e\5\q\a\8\6\x\2\s\0\r\m\c\e\4 ]] 00:10:20.540 08:52:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:20.540 08:52:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:10:20.540 [2024-07-25 08:52:27.455767] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:20.540 [2024-07-25 08:52:27.456041] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64350 ] 00:10:20.540 [2024-07-25 08:52:27.634514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.798 [2024-07-25 08:52:27.876836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.056 [2024-07-25 08:52:28.087288] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:22.708  Copying: 512/512 [B] (average 250 kBps) 00:10:22.708 00:10:22.708 08:52:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 3q03il5qqln5z39italsl8eid1vtgmiv2ze7ngjt4zemg8a6b00rjan07r3e2w007e0kgf9czygaet4mnvck2xxjufufkggc4rvzhysbmjdihovxtyzisg8ojo9fnsivnp3fyktypeto3un0g6rb26lt0xw0wtulxgwmg0gvlqpr3isg7kqpqpq1tbap8d0pnq31biupzydyn0idtvuo9woyhv6ukfg0c8waxiovmyy2a5uujc6zkanwhy4qkekmg8omqyn0zhpm8clhuhlbz7ot5i1g3l2b98d6nnxj86bgk5e66ijbjy0q3zbxj74gswrc7umsgjei4zx5jyxw5dzhtj4402ejjccevf8kx2clblcu6b1vux3c3m4zgpy057lxanubxbqqgfpoigwrrgf4a452o54tvvudhh92lndv7yaaczyany81l3prcwwh4gsdgb3pfdgbtuwdaurg4par4mny8f0fulxoh6i50h8el4b5ce5qa86x2s0rmce4 == \3\q\0\3\i\l\5\q\q\l\n\5\z\3\9\i\t\a\l\s\l\8\e\i\d\1\v\t\g\m\i\v\2\z\e\7\n\g\j\t\4\z\e\m\g\8\a\6\b\0\0\r\j\a\n\0\7\r\3\e\2\w\0\0\7\e\0\k\g\f\9\c\z\y\g\a\e\t\4\m\n\v\c\k\2\x\x\j\u\f\u\f\k\g\g\c\4\r\v\z\h\y\s\b\m\j\d\i\h\o\v\x\t\y\z\i\s\g\8\o\j\o\9\f\n\s\i\v\n\p\3\f\y\k\t\y\p\e\t\o\3\u\n\0\g\6\r\b\2\6\l\t\0\x\w\0\w\t\u\l\x\g\w\m\g\0\g\v\l\q\p\r\3\i\s\g\7\k\q\p\q\p\q\1\t\b\a\p\8\d\0\p\n\q\3\1\b\i\u\p\z\y\d\y\n\0\i\d\t\v\u\o\9\w\o\y\h\v\6\u\k\f\g\0\c\8\w\a\x\i\o\v\m\y\y\2\a\5\u\u\j\c\6\z\k\a\n\w\h\y\4\q\k\e\k\m\g\8\o\m\q\y\n\0\z\h\p\m\8\c\l\h\u\h\l\b\z\7\o\t\5\i\1\g\3\l\2\b\9\8\d\6\n\n\x\j\8\6\b\g\k\5\e\6\6\i\j\b\j\y\0\q\3\z\b\x\j\7\4\g\s\w\r\c\7\u\m\s\g\j\e\i\4\z\x\5\j\y\x\w\5\d\z\h\t\j\4\4\0\2\e\j\j\c\c\e\v\f\8\k\x\2\c\l\b\l\c\u\6\b\1\v\u\x\3\c\3\m\4\z\g\p\y\0\5\7\l\x\a\n\u\b\x\b\q\q\g\f\p\o\i\g\w\r\r\g\f\4\a\4\5\2\o\5\4\t\v\v\u\d\h\h\9\2\l\n\d\v\7\y\a\a\c\z\y\a\n\y\8\1\l\3\p\r\c\w\w\h\4\g\s\d\g\b\3\p\f\d\g\b\t\u\w\d\a\u\r\g\4\p\a\r\4\m\n\y\8\f\0\f\u\l\x\o\h\6\i\5\0\h\8\e\l\4\b\5\c\e\5\q\a\8\6\x\2\s\0\r\m\c\e\4 ]] 00:10:22.708 08:52:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:22.708 08:52:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:10:22.708 [2024-07-25 08:52:29.497047] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:22.708 [2024-07-25 08:52:29.497241] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64378 ] 00:10:22.708 [2024-07-25 08:52:29.670022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.967 [2024-07-25 08:52:29.915762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.225 [2024-07-25 08:52:30.127285] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:24.601  Copying: 512/512 [B] (average 250 kBps) 00:10:24.601 00:10:24.601 08:52:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 3q03il5qqln5z39italsl8eid1vtgmiv2ze7ngjt4zemg8a6b00rjan07r3e2w007e0kgf9czygaet4mnvck2xxjufufkggc4rvzhysbmjdihovxtyzisg8ojo9fnsivnp3fyktypeto3un0g6rb26lt0xw0wtulxgwmg0gvlqpr3isg7kqpqpq1tbap8d0pnq31biupzydyn0idtvuo9woyhv6ukfg0c8waxiovmyy2a5uujc6zkanwhy4qkekmg8omqyn0zhpm8clhuhlbz7ot5i1g3l2b98d6nnxj86bgk5e66ijbjy0q3zbxj74gswrc7umsgjei4zx5jyxw5dzhtj4402ejjccevf8kx2clblcu6b1vux3c3m4zgpy057lxanubxbqqgfpoigwrrgf4a452o54tvvudhh92lndv7yaaczyany81l3prcwwh4gsdgb3pfdgbtuwdaurg4par4mny8f0fulxoh6i50h8el4b5ce5qa86x2s0rmce4 == \3\q\0\3\i\l\5\q\q\l\n\5\z\3\9\i\t\a\l\s\l\8\e\i\d\1\v\t\g\m\i\v\2\z\e\7\n\g\j\t\4\z\e\m\g\8\a\6\b\0\0\r\j\a\n\0\7\r\3\e\2\w\0\0\7\e\0\k\g\f\9\c\z\y\g\a\e\t\4\m\n\v\c\k\2\x\x\j\u\f\u\f\k\g\g\c\4\r\v\z\h\y\s\b\m\j\d\i\h\o\v\x\t\y\z\i\s\g\8\o\j\o\9\f\n\s\i\v\n\p\3\f\y\k\t\y\p\e\t\o\3\u\n\0\g\6\r\b\2\6\l\t\0\x\w\0\w\t\u\l\x\g\w\m\g\0\g\v\l\q\p\r\3\i\s\g\7\k\q\p\q\p\q\1\t\b\a\p\8\d\0\p\n\q\3\1\b\i\u\p\z\y\d\y\n\0\i\d\t\v\u\o\9\w\o\y\h\v\6\u\k\f\g\0\c\8\w\a\x\i\o\v\m\y\y\2\a\5\u\u\j\c\6\z\k\a\n\w\h\y\4\q\k\e\k\m\g\8\o\m\q\y\n\0\z\h\p\m\8\c\l\h\u\h\l\b\z\7\o\t\5\i\1\g\3\l\2\b\9\8\d\6\n\n\x\j\8\6\b\g\k\5\e\6\6\i\j\b\j\y\0\q\3\z\b\x\j\7\4\g\s\w\r\c\7\u\m\s\g\j\e\i\4\z\x\5\j\y\x\w\5\d\z\h\t\j\4\4\0\2\e\j\j\c\c\e\v\f\8\k\x\2\c\l\b\l\c\u\6\b\1\v\u\x\3\c\3\m\4\z\g\p\y\0\5\7\l\x\a\n\u\b\x\b\q\q\g\f\p\o\i\g\w\r\r\g\f\4\a\4\5\2\o\5\4\t\v\v\u\d\h\h\9\2\l\n\d\v\7\y\a\a\c\z\y\a\n\y\8\1\l\3\p\r\c\w\w\h\4\g\s\d\g\b\3\p\f\d\g\b\t\u\w\d\a\u\r\g\4\p\a\r\4\m\n\y\8\f\0\f\u\l\x\o\h\6\i\5\0\h\8\e\l\4\b\5\c\e\5\q\a\8\6\x\2\s\0\r\m\c\e\4 ]] 00:10:24.601 00:10:24.601 real 0m16.438s 00:10:24.601 user 0m13.321s 00:10:24.601 sys 0m8.210s 00:10:24.601 08:52:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:24.601 08:52:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:10:24.601 ************************************ 00:10:24.601 END TEST dd_flags_misc 00:10:24.601 ************************************ 00:10:24.601 08:52:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:10:24.601 08:52:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:10:24.601 * Second test run, disabling liburing, forcing AIO 00:10:24.601 08:52:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:10:24.601 08:52:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:10:24.601 08:52:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:24.601 08:52:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:24.601 08:52:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 
00:10:24.601 ************************************ 00:10:24.601 START TEST dd_flag_append_forced_aio 00:10:24.601 ************************************ 00:10:24.601 08:52:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1125 -- # append 00:10:24.601 08:52:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:10:24.601 08:52:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:10:24.601 08:52:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:10:24.601 08:52:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:10:24.601 08:52:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:24.601 08:52:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=zb1ixhkmu5mwumjylxdn8xxgiv75ylso 00:10:24.601 08:52:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:10:24.601 08:52:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:10:24.601 08:52:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:24.601 08:52:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=qc0dommirfbqng0684v3awjwkq1eaa7x 00:10:24.601 08:52:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s zb1ixhkmu5mwumjylxdn8xxgiv75ylso 00:10:24.601 08:52:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s qc0dommirfbqng0684v3awjwkq1eaa7x 00:10:24.601 08:52:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:10:24.601 [2024-07-25 08:52:31.658342] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:24.601 [2024-07-25 08:52:31.658923] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64424 ] 00:10:24.860 [2024-07-25 08:52:31.834941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.118 [2024-07-25 08:52:32.086000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.378 [2024-07-25 08:52:32.293700] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:26.764  Copying: 32/32 [B] (average 31 kBps) 00:10:26.764 00:10:26.764 08:52:33 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ qc0dommirfbqng0684v3awjwkq1eaa7xzb1ixhkmu5mwumjylxdn8xxgiv75ylso == \q\c\0\d\o\m\m\i\r\f\b\q\n\g\0\6\8\4\v\3\a\w\j\w\k\q\1\e\a\a\7\x\z\b\1\i\x\h\k\m\u\5\m\w\u\m\j\y\l\x\d\n\8\x\x\g\i\v\7\5\y\l\s\o ]] 00:10:26.764 00:10:26.764 real 0m2.122s 00:10:26.764 user 0m1.700s 00:10:26.764 sys 0m0.295s 00:10:26.764 08:52:33 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:26.764 08:52:33 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:26.764 ************************************ 00:10:26.764 END TEST dd_flag_append_forced_aio 00:10:26.764 ************************************ 00:10:26.764 08:52:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:10:26.764 08:52:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:26.764 08:52:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:26.764 08:52:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:26.764 ************************************ 00:10:26.764 START TEST dd_flag_directory_forced_aio 00:10:26.764 ************************************ 00:10:26.764 08:52:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1125 -- # directory 00:10:26.764 08:52:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:26.764 08:52:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:10:26.764 08:52:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:26.764 08:52:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:26.764 08:52:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:26.764 08:52:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:26.764 08:52:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:26.764 08:52:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- 
common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:26.764 08:52:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:26.764 08:52:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:26.764 08:52:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:26.764 08:52:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:26.764 [2024-07-25 08:52:33.818086] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:26.764 [2024-07-25 08:52:33.818268] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64468 ] 00:10:27.023 [2024-07-25 08:52:33.998294] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.281 [2024-07-25 08:52:34.292266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.540 [2024-07-25 08:52:34.542463] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:27.798 [2024-07-25 08:52:34.658740] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:10:27.799 [2024-07-25 08:52:34.658805] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:10:27.799 [2024-07-25 08:52:34.658893] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:28.364 [2024-07-25 08:52:35.419351] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:28.931 08:52:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:10:28.932 08:52:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:28.932 08:52:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:10:28.932 08:52:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:10:28.932 08:52:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:10:28.932 08:52:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:28.932 08:52:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:10:28.932 08:52:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:10:28.932 08:52:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 
00:10:28.932 08:52:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:28.932 08:52:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:28.932 08:52:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:28.932 08:52:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:28.932 08:52:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:28.932 08:52:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:28.932 08:52:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:28.932 08:52:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:28.932 08:52:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:10:28.932 [2024-07-25 08:52:35.991209] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:28.932 [2024-07-25 08:52:35.991422] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64495 ] 00:10:29.190 [2024-07-25 08:52:36.166277] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.449 [2024-07-25 08:52:36.452086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.708 [2024-07-25 08:52:36.660491] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:29.708 [2024-07-25 08:52:36.772539] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:10:29.708 [2024-07-25 08:52:36.772605] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:10:29.708 [2024-07-25 08:52:36.772634] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:30.644 [2024-07-25 08:52:37.555711] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:30.902 08:52:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:10:30.902 08:52:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:30.902 08:52:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:10:30.902 ************************************ 00:10:30.902 END TEST dd_flag_directory_forced_aio 00:10:30.902 ************************************ 00:10:30.902 08:52:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:10:30.902 08:52:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- 
common/autotest_common.sh@670 -- # es=1 00:10:30.902 08:52:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:30.902 00:10:30.902 real 0m4.288s 00:10:30.902 user 0m3.481s 00:10:30.902 sys 0m0.577s 00:10:30.902 08:52:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:30.902 08:52:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:31.160 08:52:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:10:31.160 08:52:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:31.160 08:52:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:31.160 08:52:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:31.160 ************************************ 00:10:31.160 START TEST dd_flag_nofollow_forced_aio 00:10:31.160 ************************************ 00:10:31.160 08:52:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1125 -- # nofollow 00:10:31.160 08:52:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:10:31.160 08:52:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:10:31.160 08:52:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:10:31.160 08:52:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:10:31.160 08:52:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:31.160 08:52:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:10:31.160 08:52:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:31.160 08:52:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:31.160 08:52:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:31.160 08:52:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:31.160 08:52:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:31.161 08:52:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:31.161 08:52:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:31.161 08:52:38 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:31.161 08:52:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:31.161 08:52:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:31.161 [2024-07-25 08:52:38.185010] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:31.161 [2024-07-25 08:52:38.185200] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64541 ] 00:10:31.418 [2024-07-25 08:52:38.371244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.676 [2024-07-25 08:52:38.647971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.935 [2024-07-25 08:52:38.841177] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:31.935 [2024-07-25 08:52:38.946781] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:10:31.935 [2024-07-25 08:52:38.946904] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:10:31.935 [2024-07-25 08:52:38.946954] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:32.870 [2024-07-25 08:52:39.690274] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:33.128 08:52:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:10:33.128 08:52:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:33.128 08:52:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:10:33.128 08:52:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:10:33.128 08:52:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:10:33.128 08:52:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:33.128 08:52:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:10:33.128 08:52:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:10:33.128 08:52:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:10:33.128 08:52:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:33.128 08:52:40 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:33.128 08:52:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:33.128 08:52:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:33.128 08:52:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:33.128 08:52:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:33.128 08:52:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:33.128 08:52:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:33.128 08:52:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:10:33.385 [2024-07-25 08:52:40.252019] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:33.385 [2024-07-25 08:52:40.252264] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64568 ] 00:10:33.385 [2024-07-25 08:52:40.429231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.666 [2024-07-25 08:52:40.700052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.926 [2024-07-25 08:52:40.906410] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:33.926 [2024-07-25 08:52:41.015731] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:10:33.926 [2024-07-25 08:52:41.015798] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:10:33.926 [2024-07-25 08:52:41.015882] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:34.861 [2024-07-25 08:52:41.724347] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:35.119 08:52:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:10:35.119 08:52:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:35.119 08:52:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:10:35.119 08:52:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:10:35.119 08:52:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:10:35.119 08:52:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:35.119 08:52:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:10:35.119 08:52:42 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:10:35.119 08:52:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:35.119 08:52:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:35.377 [2024-07-25 08:52:42.263472] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:35.377 [2024-07-25 08:52:42.263660] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64593 ] 00:10:35.377 [2024-07-25 08:52:42.437084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.634 [2024-07-25 08:52:42.674680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.892 [2024-07-25 08:52:42.867313] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:37.266  Copying: 512/512 [B] (average 500 kBps) 00:10:37.266 00:10:37.266 ************************************ 00:10:37.266 END TEST dd_flag_nofollow_forced_aio 00:10:37.266 ************************************ 00:10:37.266 08:52:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ t43yaxwjduraikclxwuo59uw1kgs0222xvy7iah0x41vxr3a7kuo9lv1ux56jwo8toni5oocag0k0vr3zhmka65v10hlerr31m3wlx0vd1ykw3did2xdyw6988jt1p50ageny3su31dbkee2rsskflg0npk00gjyu8n8yst4aiie3p22klcq1zy3wkw4tapu47wmb0t21h7hj5thzmn2n9l3tqw912wh1qsseknso1rwr2qhko9ief15iwhaaf4p7fzk88urxo5en3grnyux2qroj1dg8hmvbv90scgply5v7zu2akw3f8ymphn7pbzk6bn0io85ie8wej8wgqu0tdf86n5zgll7jz6me95jmaled2sek8y2jjtvey28lo7voh4bzasxo19goz84yh3daykzwwu08n1ndw04zetobuy0guydke6xxxr5ijlubygz7zclg0362m0342qtbt3foxdyylsndvr1tzananaj77m1udvn68tyiqd0vx43bemc == \t\4\3\y\a\x\w\j\d\u\r\a\i\k\c\l\x\w\u\o\5\9\u\w\1\k\g\s\0\2\2\2\x\v\y\7\i\a\h\0\x\4\1\v\x\r\3\a\7\k\u\o\9\l\v\1\u\x\5\6\j\w\o\8\t\o\n\i\5\o\o\c\a\g\0\k\0\v\r\3\z\h\m\k\a\6\5\v\1\0\h\l\e\r\r\3\1\m\3\w\l\x\0\v\d\1\y\k\w\3\d\i\d\2\x\d\y\w\6\9\8\8\j\t\1\p\5\0\a\g\e\n\y\3\s\u\3\1\d\b\k\e\e\2\r\s\s\k\f\l\g\0\n\p\k\0\0\g\j\y\u\8\n\8\y\s\t\4\a\i\i\e\3\p\2\2\k\l\c\q\1\z\y\3\w\k\w\4\t\a\p\u\4\7\w\m\b\0\t\2\1\h\7\h\j\5\t\h\z\m\n\2\n\9\l\3\t\q\w\9\1\2\w\h\1\q\s\s\e\k\n\s\o\1\r\w\r\2\q\h\k\o\9\i\e\f\1\5\i\w\h\a\a\f\4\p\7\f\z\k\8\8\u\r\x\o\5\e\n\3\g\r\n\y\u\x\2\q\r\o\j\1\d\g\8\h\m\v\b\v\9\0\s\c\g\p\l\y\5\v\7\z\u\2\a\k\w\3\f\8\y\m\p\h\n\7\p\b\z\k\6\b\n\0\i\o\8\5\i\e\8\w\e\j\8\w\g\q\u\0\t\d\f\8\6\n\5\z\g\l\l\7\j\z\6\m\e\9\5\j\m\a\l\e\d\2\s\e\k\8\y\2\j\j\t\v\e\y\2\8\l\o\7\v\o\h\4\b\z\a\s\x\o\1\9\g\o\z\8\4\y\h\3\d\a\y\k\z\w\w\u\0\8\n\1\n\d\w\0\4\z\e\t\o\b\u\y\0\g\u\y\d\k\e\6\x\x\x\r\5\i\j\l\u\b\y\g\z\7\z\c\l\g\0\3\6\2\m\0\3\4\2\q\t\b\t\3\f\o\x\d\y\y\l\s\n\d\v\r\1\t\z\a\n\a\n\a\j\7\7\m\1\u\d\v\n\6\8\t\y\i\q\d\0\v\x\4\3\b\e\m\c ]] 00:10:37.266 00:10:37.266 real 0m6.069s 00:10:37.266 user 0m4.870s 00:10:37.266 sys 0m0.846s 00:10:37.266 08:52:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:37.266 08:52:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:37.266 08:52:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:10:37.266 
08:52:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:37.266 08:52:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:37.266 08:52:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:37.266 ************************************ 00:10:37.266 START TEST dd_flag_noatime_forced_aio 00:10:37.266 ************************************ 00:10:37.266 08:52:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1125 -- # noatime 00:10:37.266 08:52:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:10:37.266 08:52:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:10:37.266 08:52:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:10:37.266 08:52:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:10:37.266 08:52:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:37.266 08:52:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:37.266 08:52:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1721897562 00:10:37.266 08:52:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:37.266 08:52:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1721897564 00:10:37.266 08:52:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:10:38.221 08:52:45 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:38.479 [2024-07-25 08:52:45.336711] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:38.479 [2024-07-25 08:52:45.336955] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64651 ] 00:10:38.479 [2024-07-25 08:52:45.515576] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.738 [2024-07-25 08:52:45.774655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.997 [2024-07-25 08:52:45.980235] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:40.374  Copying: 512/512 [B] (average 500 kBps) 00:10:40.374 00:10:40.374 08:52:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:40.374 08:52:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1721897562 )) 00:10:40.374 08:52:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:40.374 08:52:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1721897564 )) 00:10:40.374 08:52:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:40.374 [2024-07-25 08:52:47.368790] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:40.374 [2024-07-25 08:52:47.368997] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64680 ] 00:10:40.632 [2024-07-25 08:52:47.545486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.891 [2024-07-25 08:52:47.769970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.891 [2024-07-25 08:52:47.964246] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:42.525  Copying: 512/512 [B] (average 500 kBps) 00:10:42.526 00:10:42.526 08:52:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:42.526 ************************************ 00:10:42.526 END TEST dd_flag_noatime_forced_aio 00:10:42.526 ************************************ 00:10:42.526 08:52:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1721897568 )) 00:10:42.526 00:10:42.526 real 0m5.081s 00:10:42.526 user 0m3.245s 00:10:42.526 sys 0m0.585s 00:10:42.526 08:52:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:42.526 08:52:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:42.526 08:52:49 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:10:42.526 08:52:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:42.526 08:52:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:42.526 08:52:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:42.526 
************************************ 00:10:42.526 START TEST dd_flags_misc_forced_aio 00:10:42.526 ************************************ 00:10:42.526 08:52:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1125 -- # io 00:10:42.526 08:52:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:10:42.526 08:52:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:10:42.526 08:52:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:10:42.526 08:52:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:10:42.526 08:52:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:10:42.526 08:52:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:10:42.526 08:52:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:42.526 08:52:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:42.526 08:52:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:10:42.526 [2024-07-25 08:52:49.435087] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:42.526 [2024-07-25 08:52:49.435283] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64724 ] 00:10:42.526 [2024-07-25 08:52:49.605537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.785 [2024-07-25 08:52:49.832648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.043 [2024-07-25 08:52:50.026044] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:44.419  Copying: 512/512 [B] (average 500 kBps) 00:10:44.419 00:10:44.419 08:52:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ tebqbb9s2ae9q97kaa6fmcnrom5v3wjuh6751kilojsvox5h7qjukyfmytd6vgsxas2j4zwm62vwqdaa7r2v0xjhyoheuon05x7izqhrizr91ek74ilubejg867ifaqffqljpuizsnmi07jcmm86qef6nti7esrcep20lwjirs5f5j8vkl4uc4xl6ro4gydgaw9vyk449yp6k40lzasbhlaegbag4pxxumcpem5ojznn1rn72njoxe13t4ol2wwglalnoc5ah78ueertx8vi4uvvvyfrf1vuboan7hkqxv263sxiaw8cuys69gkjtzwd5j56goiiw8rq1d4nhddxeyv689dcyhvs4bx03gxl4uvr9z0zayjcp34pdmjihf4g38fph5gmkwsluhpoek6vkx08mzd2aqbcnmx2jkgr55b5xe2ps8qvy1mgfza15ztg7pe25bmxo6tyabvkqg47zku3s13o9hy2ovtbygqcqnry42jztamg5iql23fu26pg == 
\t\e\b\q\b\b\9\s\2\a\e\9\q\9\7\k\a\a\6\f\m\c\n\r\o\m\5\v\3\w\j\u\h\6\7\5\1\k\i\l\o\j\s\v\o\x\5\h\7\q\j\u\k\y\f\m\y\t\d\6\v\g\s\x\a\s\2\j\4\z\w\m\6\2\v\w\q\d\a\a\7\r\2\v\0\x\j\h\y\o\h\e\u\o\n\0\5\x\7\i\z\q\h\r\i\z\r\9\1\e\k\7\4\i\l\u\b\e\j\g\8\6\7\i\f\a\q\f\f\q\l\j\p\u\i\z\s\n\m\i\0\7\j\c\m\m\8\6\q\e\f\6\n\t\i\7\e\s\r\c\e\p\2\0\l\w\j\i\r\s\5\f\5\j\8\v\k\l\4\u\c\4\x\l\6\r\o\4\g\y\d\g\a\w\9\v\y\k\4\4\9\y\p\6\k\4\0\l\z\a\s\b\h\l\a\e\g\b\a\g\4\p\x\x\u\m\c\p\e\m\5\o\j\z\n\n\1\r\n\7\2\n\j\o\x\e\1\3\t\4\o\l\2\w\w\g\l\a\l\n\o\c\5\a\h\7\8\u\e\e\r\t\x\8\v\i\4\u\v\v\v\y\f\r\f\1\v\u\b\o\a\n\7\h\k\q\x\v\2\6\3\s\x\i\a\w\8\c\u\y\s\6\9\g\k\j\t\z\w\d\5\j\5\6\g\o\i\i\w\8\r\q\1\d\4\n\h\d\d\x\e\y\v\6\8\9\d\c\y\h\v\s\4\b\x\0\3\g\x\l\4\u\v\r\9\z\0\z\a\y\j\c\p\3\4\p\d\m\j\i\h\f\4\g\3\8\f\p\h\5\g\m\k\w\s\l\u\h\p\o\e\k\6\v\k\x\0\8\m\z\d\2\a\q\b\c\n\m\x\2\j\k\g\r\5\5\b\5\x\e\2\p\s\8\q\v\y\1\m\g\f\z\a\1\5\z\t\g\7\p\e\2\5\b\m\x\o\6\t\y\a\b\v\k\q\g\4\7\z\k\u\3\s\1\3\o\9\h\y\2\o\v\t\b\y\g\q\c\q\n\r\y\4\2\j\z\t\a\m\g\5\i\q\l\2\3\f\u\2\6\p\g ]] 00:10:44.419 08:52:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:44.419 08:52:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:10:44.419 [2024-07-25 08:52:51.420690] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:44.419 [2024-07-25 08:52:51.420920] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64749 ] 00:10:44.678 [2024-07-25 08:52:51.589396] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.936 [2024-07-25 08:52:51.837461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.936 [2024-07-25 08:52:52.041849] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:46.571  Copying: 512/512 [B] (average 500 kBps) 00:10:46.571 00:10:46.571 08:52:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ tebqbb9s2ae9q97kaa6fmcnrom5v3wjuh6751kilojsvox5h7qjukyfmytd6vgsxas2j4zwm62vwqdaa7r2v0xjhyoheuon05x7izqhrizr91ek74ilubejg867ifaqffqljpuizsnmi07jcmm86qef6nti7esrcep20lwjirs5f5j8vkl4uc4xl6ro4gydgaw9vyk449yp6k40lzasbhlaegbag4pxxumcpem5ojznn1rn72njoxe13t4ol2wwglalnoc5ah78ueertx8vi4uvvvyfrf1vuboan7hkqxv263sxiaw8cuys69gkjtzwd5j56goiiw8rq1d4nhddxeyv689dcyhvs4bx03gxl4uvr9z0zayjcp34pdmjihf4g38fph5gmkwsluhpoek6vkx08mzd2aqbcnmx2jkgr55b5xe2ps8qvy1mgfza15ztg7pe25bmxo6tyabvkqg47zku3s13o9hy2ovtbygqcqnry42jztamg5iql23fu26pg == 
\t\e\b\q\b\b\9\s\2\a\e\9\q\9\7\k\a\a\6\f\m\c\n\r\o\m\5\v\3\w\j\u\h\6\7\5\1\k\i\l\o\j\s\v\o\x\5\h\7\q\j\u\k\y\f\m\y\t\d\6\v\g\s\x\a\s\2\j\4\z\w\m\6\2\v\w\q\d\a\a\7\r\2\v\0\x\j\h\y\o\h\e\u\o\n\0\5\x\7\i\z\q\h\r\i\z\r\9\1\e\k\7\4\i\l\u\b\e\j\g\8\6\7\i\f\a\q\f\f\q\l\j\p\u\i\z\s\n\m\i\0\7\j\c\m\m\8\6\q\e\f\6\n\t\i\7\e\s\r\c\e\p\2\0\l\w\j\i\r\s\5\f\5\j\8\v\k\l\4\u\c\4\x\l\6\r\o\4\g\y\d\g\a\w\9\v\y\k\4\4\9\y\p\6\k\4\0\l\z\a\s\b\h\l\a\e\g\b\a\g\4\p\x\x\u\m\c\p\e\m\5\o\j\z\n\n\1\r\n\7\2\n\j\o\x\e\1\3\t\4\o\l\2\w\w\g\l\a\l\n\o\c\5\a\h\7\8\u\e\e\r\t\x\8\v\i\4\u\v\v\v\y\f\r\f\1\v\u\b\o\a\n\7\h\k\q\x\v\2\6\3\s\x\i\a\w\8\c\u\y\s\6\9\g\k\j\t\z\w\d\5\j\5\6\g\o\i\i\w\8\r\q\1\d\4\n\h\d\d\x\e\y\v\6\8\9\d\c\y\h\v\s\4\b\x\0\3\g\x\l\4\u\v\r\9\z\0\z\a\y\j\c\p\3\4\p\d\m\j\i\h\f\4\g\3\8\f\p\h\5\g\m\k\w\s\l\u\h\p\o\e\k\6\v\k\x\0\8\m\z\d\2\a\q\b\c\n\m\x\2\j\k\g\r\5\5\b\5\x\e\2\p\s\8\q\v\y\1\m\g\f\z\a\1\5\z\t\g\7\p\e\2\5\b\m\x\o\6\t\y\a\b\v\k\q\g\4\7\z\k\u\3\s\1\3\o\9\h\y\2\o\v\t\b\y\g\q\c\q\n\r\y\4\2\j\z\t\a\m\g\5\i\q\l\2\3\f\u\2\6\p\g ]] 00:10:46.571 08:52:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:46.571 08:52:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:10:46.571 [2024-07-25 08:52:53.489693] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:46.571 [2024-07-25 08:52:53.489929] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64773 ] 00:10:46.571 [2024-07-25 08:52:53.666704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.830 [2024-07-25 08:52:53.908029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.088 [2024-07-25 08:52:54.112566] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:48.304  Copying: 512/512 [B] (average 166 kBps) 00:10:48.304 00:10:48.563 08:52:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ tebqbb9s2ae9q97kaa6fmcnrom5v3wjuh6751kilojsvox5h7qjukyfmytd6vgsxas2j4zwm62vwqdaa7r2v0xjhyoheuon05x7izqhrizr91ek74ilubejg867ifaqffqljpuizsnmi07jcmm86qef6nti7esrcep20lwjirs5f5j8vkl4uc4xl6ro4gydgaw9vyk449yp6k40lzasbhlaegbag4pxxumcpem5ojznn1rn72njoxe13t4ol2wwglalnoc5ah78ueertx8vi4uvvvyfrf1vuboan7hkqxv263sxiaw8cuys69gkjtzwd5j56goiiw8rq1d4nhddxeyv689dcyhvs4bx03gxl4uvr9z0zayjcp34pdmjihf4g38fph5gmkwsluhpoek6vkx08mzd2aqbcnmx2jkgr55b5xe2ps8qvy1mgfza15ztg7pe25bmxo6tyabvkqg47zku3s13o9hy2ovtbygqcqnry42jztamg5iql23fu26pg == 
\t\e\b\q\b\b\9\s\2\a\e\9\q\9\7\k\a\a\6\f\m\c\n\r\o\m\5\v\3\w\j\u\h\6\7\5\1\k\i\l\o\j\s\v\o\x\5\h\7\q\j\u\k\y\f\m\y\t\d\6\v\g\s\x\a\s\2\j\4\z\w\m\6\2\v\w\q\d\a\a\7\r\2\v\0\x\j\h\y\o\h\e\u\o\n\0\5\x\7\i\z\q\h\r\i\z\r\9\1\e\k\7\4\i\l\u\b\e\j\g\8\6\7\i\f\a\q\f\f\q\l\j\p\u\i\z\s\n\m\i\0\7\j\c\m\m\8\6\q\e\f\6\n\t\i\7\e\s\r\c\e\p\2\0\l\w\j\i\r\s\5\f\5\j\8\v\k\l\4\u\c\4\x\l\6\r\o\4\g\y\d\g\a\w\9\v\y\k\4\4\9\y\p\6\k\4\0\l\z\a\s\b\h\l\a\e\g\b\a\g\4\p\x\x\u\m\c\p\e\m\5\o\j\z\n\n\1\r\n\7\2\n\j\o\x\e\1\3\t\4\o\l\2\w\w\g\l\a\l\n\o\c\5\a\h\7\8\u\e\e\r\t\x\8\v\i\4\u\v\v\v\y\f\r\f\1\v\u\b\o\a\n\7\h\k\q\x\v\2\6\3\s\x\i\a\w\8\c\u\y\s\6\9\g\k\j\t\z\w\d\5\j\5\6\g\o\i\i\w\8\r\q\1\d\4\n\h\d\d\x\e\y\v\6\8\9\d\c\y\h\v\s\4\b\x\0\3\g\x\l\4\u\v\r\9\z\0\z\a\y\j\c\p\3\4\p\d\m\j\i\h\f\4\g\3\8\f\p\h\5\g\m\k\w\s\l\u\h\p\o\e\k\6\v\k\x\0\8\m\z\d\2\a\q\b\c\n\m\x\2\j\k\g\r\5\5\b\5\x\e\2\p\s\8\q\v\y\1\m\g\f\z\a\1\5\z\t\g\7\p\e\2\5\b\m\x\o\6\t\y\a\b\v\k\q\g\4\7\z\k\u\3\s\1\3\o\9\h\y\2\o\v\t\b\y\g\q\c\q\n\r\y\4\2\j\z\t\a\m\g\5\i\q\l\2\3\f\u\2\6\p\g ]] 00:10:48.563 08:52:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:48.563 08:52:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:10:48.563 [2024-07-25 08:52:55.539063] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:48.563 [2024-07-25 08:52:55.539249] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64794 ] 00:10:48.821 [2024-07-25 08:52:55.714673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.080 [2024-07-25 08:52:55.956600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.080 [2024-07-25 08:52:56.162538] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:50.712  Copying: 512/512 [B] (average 250 kBps) 00:10:50.712 00:10:50.713 08:52:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ tebqbb9s2ae9q97kaa6fmcnrom5v3wjuh6751kilojsvox5h7qjukyfmytd6vgsxas2j4zwm62vwqdaa7r2v0xjhyoheuon05x7izqhrizr91ek74ilubejg867ifaqffqljpuizsnmi07jcmm86qef6nti7esrcep20lwjirs5f5j8vkl4uc4xl6ro4gydgaw9vyk449yp6k40lzasbhlaegbag4pxxumcpem5ojznn1rn72njoxe13t4ol2wwglalnoc5ah78ueertx8vi4uvvvyfrf1vuboan7hkqxv263sxiaw8cuys69gkjtzwd5j56goiiw8rq1d4nhddxeyv689dcyhvs4bx03gxl4uvr9z0zayjcp34pdmjihf4g38fph5gmkwsluhpoek6vkx08mzd2aqbcnmx2jkgr55b5xe2ps8qvy1mgfza15ztg7pe25bmxo6tyabvkqg47zku3s13o9hy2ovtbygqcqnry42jztamg5iql23fu26pg == 
\t\e\b\q\b\b\9\s\2\a\e\9\q\9\7\k\a\a\6\f\m\c\n\r\o\m\5\v\3\w\j\u\h\6\7\5\1\k\i\l\o\j\s\v\o\x\5\h\7\q\j\u\k\y\f\m\y\t\d\6\v\g\s\x\a\s\2\j\4\z\w\m\6\2\v\w\q\d\a\a\7\r\2\v\0\x\j\h\y\o\h\e\u\o\n\0\5\x\7\i\z\q\h\r\i\z\r\9\1\e\k\7\4\i\l\u\b\e\j\g\8\6\7\i\f\a\q\f\f\q\l\j\p\u\i\z\s\n\m\i\0\7\j\c\m\m\8\6\q\e\f\6\n\t\i\7\e\s\r\c\e\p\2\0\l\w\j\i\r\s\5\f\5\j\8\v\k\l\4\u\c\4\x\l\6\r\o\4\g\y\d\g\a\w\9\v\y\k\4\4\9\y\p\6\k\4\0\l\z\a\s\b\h\l\a\e\g\b\a\g\4\p\x\x\u\m\c\p\e\m\5\o\j\z\n\n\1\r\n\7\2\n\j\o\x\e\1\3\t\4\o\l\2\w\w\g\l\a\l\n\o\c\5\a\h\7\8\u\e\e\r\t\x\8\v\i\4\u\v\v\v\y\f\r\f\1\v\u\b\o\a\n\7\h\k\q\x\v\2\6\3\s\x\i\a\w\8\c\u\y\s\6\9\g\k\j\t\z\w\d\5\j\5\6\g\o\i\i\w\8\r\q\1\d\4\n\h\d\d\x\e\y\v\6\8\9\d\c\y\h\v\s\4\b\x\0\3\g\x\l\4\u\v\r\9\z\0\z\a\y\j\c\p\3\4\p\d\m\j\i\h\f\4\g\3\8\f\p\h\5\g\m\k\w\s\l\u\h\p\o\e\k\6\v\k\x\0\8\m\z\d\2\a\q\b\c\n\m\x\2\j\k\g\r\5\5\b\5\x\e\2\p\s\8\q\v\y\1\m\g\f\z\a\1\5\z\t\g\7\p\e\2\5\b\m\x\o\6\t\y\a\b\v\k\q\g\4\7\z\k\u\3\s\1\3\o\9\h\y\2\o\v\t\b\y\g\q\c\q\n\r\y\4\2\j\z\t\a\m\g\5\i\q\l\2\3\f\u\2\6\p\g ]] 00:10:50.713 08:52:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:10:50.713 08:52:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:10:50.713 08:52:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:10:50.713 08:52:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:50.713 08:52:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:50.713 08:52:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:10:50.713 [2024-07-25 08:52:57.555032] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:50.713 [2024-07-25 08:52:57.555532] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64819 ] 00:10:50.713 [2024-07-25 08:52:57.729691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.971 [2024-07-25 08:52:57.967595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.229 [2024-07-25 08:52:58.171416] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:52.604  Copying: 512/512 [B] (average 500 kBps) 00:10:52.604 00:10:52.604 08:52:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ah7wf07j2ikvxwypq51xsgsm4wul1l2m2ll1detwayey72ndpdzd10s237rhrdo3qlmgfa8ueii7rzjf9klj8kkkue04ggmgh4uaiaralufy2phzfgyh4tc9xxvxjjss6rlovoy92wi37wss1miaw35aua3d1zcdptyv68bo59fskh3pm8lrrnqawc2jn9rjy2fbybs1d1r2tegsa6keflpox578ec9xue291yki8leedswv06mct3qdtcvd64rig7is6m3sx6fl0qud393fji42xnzomq3ajwjcby4mx86bbbckpuaznt9qeu7r09kl3m91qlazn4wezxl9u2ig7do9g7gyjwz01u1rfyh5i2jns43px07ps51vegz3jbr2ay5tps0o3khi2fda96fh2xjb8pmwxknmi3wsgtoid3b4gmywo32gg581mndz79v0q01z720rkryv9iycv4eut6opgtx5rmc8pbjlxpl9dggav7lqwa7tbkjjvcdteksk == \a\h\7\w\f\0\7\j\2\i\k\v\x\w\y\p\q\5\1\x\s\g\s\m\4\w\u\l\1\l\2\m\2\l\l\1\d\e\t\w\a\y\e\y\7\2\n\d\p\d\z\d\1\0\s\2\3\7\r\h\r\d\o\3\q\l\m\g\f\a\8\u\e\i\i\7\r\z\j\f\9\k\l\j\8\k\k\k\u\e\0\4\g\g\m\g\h\4\u\a\i\a\r\a\l\u\f\y\2\p\h\z\f\g\y\h\4\t\c\9\x\x\v\x\j\j\s\s\6\r\l\o\v\o\y\9\2\w\i\3\7\w\s\s\1\m\i\a\w\3\5\a\u\a\3\d\1\z\c\d\p\t\y\v\6\8\b\o\5\9\f\s\k\h\3\p\m\8\l\r\r\n\q\a\w\c\2\j\n\9\r\j\y\2\f\b\y\b\s\1\d\1\r\2\t\e\g\s\a\6\k\e\f\l\p\o\x\5\7\8\e\c\9\x\u\e\2\9\1\y\k\i\8\l\e\e\d\s\w\v\0\6\m\c\t\3\q\d\t\c\v\d\6\4\r\i\g\7\i\s\6\m\3\s\x\6\f\l\0\q\u\d\3\9\3\f\j\i\4\2\x\n\z\o\m\q\3\a\j\w\j\c\b\y\4\m\x\8\6\b\b\b\c\k\p\u\a\z\n\t\9\q\e\u\7\r\0\9\k\l\3\m\9\1\q\l\a\z\n\4\w\e\z\x\l\9\u\2\i\g\7\d\o\9\g\7\g\y\j\w\z\0\1\u\1\r\f\y\h\5\i\2\j\n\s\4\3\p\x\0\7\p\s\5\1\v\e\g\z\3\j\b\r\2\a\y\5\t\p\s\0\o\3\k\h\i\2\f\d\a\9\6\f\h\2\x\j\b\8\p\m\w\x\k\n\m\i\3\w\s\g\t\o\i\d\3\b\4\g\m\y\w\o\3\2\g\g\5\8\1\m\n\d\z\7\9\v\0\q\0\1\z\7\2\0\r\k\r\y\v\9\i\y\c\v\4\e\u\t\6\o\p\g\t\x\5\r\m\c\8\p\b\j\l\x\p\l\9\d\g\g\a\v\7\l\q\w\a\7\t\b\k\j\j\v\c\d\t\e\k\s\k ]] 00:10:52.604 08:52:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:52.604 08:52:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:10:52.604 [2024-07-25 08:52:59.586277] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:52.604 [2024-07-25 08:52:59.586453] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64844 ] 00:10:52.862 [2024-07-25 08:52:59.761358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.121 [2024-07-25 08:52:59.994598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.121 [2024-07-25 08:53:00.197897] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:54.753  Copying: 512/512 [B] (average 500 kBps) 00:10:54.753 00:10:54.753 08:53:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ah7wf07j2ikvxwypq51xsgsm4wul1l2m2ll1detwayey72ndpdzd10s237rhrdo3qlmgfa8ueii7rzjf9klj8kkkue04ggmgh4uaiaralufy2phzfgyh4tc9xxvxjjss6rlovoy92wi37wss1miaw35aua3d1zcdptyv68bo59fskh3pm8lrrnqawc2jn9rjy2fbybs1d1r2tegsa6keflpox578ec9xue291yki8leedswv06mct3qdtcvd64rig7is6m3sx6fl0qud393fji42xnzomq3ajwjcby4mx86bbbckpuaznt9qeu7r09kl3m91qlazn4wezxl9u2ig7do9g7gyjwz01u1rfyh5i2jns43px07ps51vegz3jbr2ay5tps0o3khi2fda96fh2xjb8pmwxknmi3wsgtoid3b4gmywo32gg581mndz79v0q01z720rkryv9iycv4eut6opgtx5rmc8pbjlxpl9dggav7lqwa7tbkjjvcdteksk == \a\h\7\w\f\0\7\j\2\i\k\v\x\w\y\p\q\5\1\x\s\g\s\m\4\w\u\l\1\l\2\m\2\l\l\1\d\e\t\w\a\y\e\y\7\2\n\d\p\d\z\d\1\0\s\2\3\7\r\h\r\d\o\3\q\l\m\g\f\a\8\u\e\i\i\7\r\z\j\f\9\k\l\j\8\k\k\k\u\e\0\4\g\g\m\g\h\4\u\a\i\a\r\a\l\u\f\y\2\p\h\z\f\g\y\h\4\t\c\9\x\x\v\x\j\j\s\s\6\r\l\o\v\o\y\9\2\w\i\3\7\w\s\s\1\m\i\a\w\3\5\a\u\a\3\d\1\z\c\d\p\t\y\v\6\8\b\o\5\9\f\s\k\h\3\p\m\8\l\r\r\n\q\a\w\c\2\j\n\9\r\j\y\2\f\b\y\b\s\1\d\1\r\2\t\e\g\s\a\6\k\e\f\l\p\o\x\5\7\8\e\c\9\x\u\e\2\9\1\y\k\i\8\l\e\e\d\s\w\v\0\6\m\c\t\3\q\d\t\c\v\d\6\4\r\i\g\7\i\s\6\m\3\s\x\6\f\l\0\q\u\d\3\9\3\f\j\i\4\2\x\n\z\o\m\q\3\a\j\w\j\c\b\y\4\m\x\8\6\b\b\b\c\k\p\u\a\z\n\t\9\q\e\u\7\r\0\9\k\l\3\m\9\1\q\l\a\z\n\4\w\e\z\x\l\9\u\2\i\g\7\d\o\9\g\7\g\y\j\w\z\0\1\u\1\r\f\y\h\5\i\2\j\n\s\4\3\p\x\0\7\p\s\5\1\v\e\g\z\3\j\b\r\2\a\y\5\t\p\s\0\o\3\k\h\i\2\f\d\a\9\6\f\h\2\x\j\b\8\p\m\w\x\k\n\m\i\3\w\s\g\t\o\i\d\3\b\4\g\m\y\w\o\3\2\g\g\5\8\1\m\n\d\z\7\9\v\0\q\0\1\z\7\2\0\r\k\r\y\v\9\i\y\c\v\4\e\u\t\6\o\p\g\t\x\5\r\m\c\8\p\b\j\l\x\p\l\9\d\g\g\a\v\7\l\q\w\a\7\t\b\k\j\j\v\c\d\t\e\k\s\k ]] 00:10:54.753 08:53:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:54.753 08:53:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:10:54.753 [2024-07-25 08:53:01.634280] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:54.753 [2024-07-25 08:53:01.634489] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64869 ] 00:10:54.753 [2024-07-25 08:53:01.807638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.010 [2024-07-25 08:53:02.052958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.267 [2024-07-25 08:53:02.257262] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:56.642  Copying: 512/512 [B] (average 500 kBps) 00:10:56.642 00:10:56.642 08:53:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ah7wf07j2ikvxwypq51xsgsm4wul1l2m2ll1detwayey72ndpdzd10s237rhrdo3qlmgfa8ueii7rzjf9klj8kkkue04ggmgh4uaiaralufy2phzfgyh4tc9xxvxjjss6rlovoy92wi37wss1miaw35aua3d1zcdptyv68bo59fskh3pm8lrrnqawc2jn9rjy2fbybs1d1r2tegsa6keflpox578ec9xue291yki8leedswv06mct3qdtcvd64rig7is6m3sx6fl0qud393fji42xnzomq3ajwjcby4mx86bbbckpuaznt9qeu7r09kl3m91qlazn4wezxl9u2ig7do9g7gyjwz01u1rfyh5i2jns43px07ps51vegz3jbr2ay5tps0o3khi2fda96fh2xjb8pmwxknmi3wsgtoid3b4gmywo32gg581mndz79v0q01z720rkryv9iycv4eut6opgtx5rmc8pbjlxpl9dggav7lqwa7tbkjjvcdteksk == \a\h\7\w\f\0\7\j\2\i\k\v\x\w\y\p\q\5\1\x\s\g\s\m\4\w\u\l\1\l\2\m\2\l\l\1\d\e\t\w\a\y\e\y\7\2\n\d\p\d\z\d\1\0\s\2\3\7\r\h\r\d\o\3\q\l\m\g\f\a\8\u\e\i\i\7\r\z\j\f\9\k\l\j\8\k\k\k\u\e\0\4\g\g\m\g\h\4\u\a\i\a\r\a\l\u\f\y\2\p\h\z\f\g\y\h\4\t\c\9\x\x\v\x\j\j\s\s\6\r\l\o\v\o\y\9\2\w\i\3\7\w\s\s\1\m\i\a\w\3\5\a\u\a\3\d\1\z\c\d\p\t\y\v\6\8\b\o\5\9\f\s\k\h\3\p\m\8\l\r\r\n\q\a\w\c\2\j\n\9\r\j\y\2\f\b\y\b\s\1\d\1\r\2\t\e\g\s\a\6\k\e\f\l\p\o\x\5\7\8\e\c\9\x\u\e\2\9\1\y\k\i\8\l\e\e\d\s\w\v\0\6\m\c\t\3\q\d\t\c\v\d\6\4\r\i\g\7\i\s\6\m\3\s\x\6\f\l\0\q\u\d\3\9\3\f\j\i\4\2\x\n\z\o\m\q\3\a\j\w\j\c\b\y\4\m\x\8\6\b\b\b\c\k\p\u\a\z\n\t\9\q\e\u\7\r\0\9\k\l\3\m\9\1\q\l\a\z\n\4\w\e\z\x\l\9\u\2\i\g\7\d\o\9\g\7\g\y\j\w\z\0\1\u\1\r\f\y\h\5\i\2\j\n\s\4\3\p\x\0\7\p\s\5\1\v\e\g\z\3\j\b\r\2\a\y\5\t\p\s\0\o\3\k\h\i\2\f\d\a\9\6\f\h\2\x\j\b\8\p\m\w\x\k\n\m\i\3\w\s\g\t\o\i\d\3\b\4\g\m\y\w\o\3\2\g\g\5\8\1\m\n\d\z\7\9\v\0\q\0\1\z\7\2\0\r\k\r\y\v\9\i\y\c\v\4\e\u\t\6\o\p\g\t\x\5\r\m\c\8\p\b\j\l\x\p\l\9\d\g\g\a\v\7\l\q\w\a\7\t\b\k\j\j\v\c\d\t\e\k\s\k ]] 00:10:56.642 08:53:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:56.642 08:53:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:10:56.642 [2024-07-25 08:53:03.676135] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:56.642 [2024-07-25 08:53:03.676338] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64894 ] 00:10:56.901 [2024-07-25 08:53:03.846974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.160 [2024-07-25 08:53:04.083827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.419 [2024-07-25 08:53:04.286185] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:58.794  Copying: 512/512 [B] (average 250 kBps) 00:10:58.794 00:10:58.794 08:53:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ah7wf07j2ikvxwypq51xsgsm4wul1l2m2ll1detwayey72ndpdzd10s237rhrdo3qlmgfa8ueii7rzjf9klj8kkkue04ggmgh4uaiaralufy2phzfgyh4tc9xxvxjjss6rlovoy92wi37wss1miaw35aua3d1zcdptyv68bo59fskh3pm8lrrnqawc2jn9rjy2fbybs1d1r2tegsa6keflpox578ec9xue291yki8leedswv06mct3qdtcvd64rig7is6m3sx6fl0qud393fji42xnzomq3ajwjcby4mx86bbbckpuaznt9qeu7r09kl3m91qlazn4wezxl9u2ig7do9g7gyjwz01u1rfyh5i2jns43px07ps51vegz3jbr2ay5tps0o3khi2fda96fh2xjb8pmwxknmi3wsgtoid3b4gmywo32gg581mndz79v0q01z720rkryv9iycv4eut6opgtx5rmc8pbjlxpl9dggav7lqwa7tbkjjvcdteksk == \a\h\7\w\f\0\7\j\2\i\k\v\x\w\y\p\q\5\1\x\s\g\s\m\4\w\u\l\1\l\2\m\2\l\l\1\d\e\t\w\a\y\e\y\7\2\n\d\p\d\z\d\1\0\s\2\3\7\r\h\r\d\o\3\q\l\m\g\f\a\8\u\e\i\i\7\r\z\j\f\9\k\l\j\8\k\k\k\u\e\0\4\g\g\m\g\h\4\u\a\i\a\r\a\l\u\f\y\2\p\h\z\f\g\y\h\4\t\c\9\x\x\v\x\j\j\s\s\6\r\l\o\v\o\y\9\2\w\i\3\7\w\s\s\1\m\i\a\w\3\5\a\u\a\3\d\1\z\c\d\p\t\y\v\6\8\b\o\5\9\f\s\k\h\3\p\m\8\l\r\r\n\q\a\w\c\2\j\n\9\r\j\y\2\f\b\y\b\s\1\d\1\r\2\t\e\g\s\a\6\k\e\f\l\p\o\x\5\7\8\e\c\9\x\u\e\2\9\1\y\k\i\8\l\e\e\d\s\w\v\0\6\m\c\t\3\q\d\t\c\v\d\6\4\r\i\g\7\i\s\6\m\3\s\x\6\f\l\0\q\u\d\3\9\3\f\j\i\4\2\x\n\z\o\m\q\3\a\j\w\j\c\b\y\4\m\x\8\6\b\b\b\c\k\p\u\a\z\n\t\9\q\e\u\7\r\0\9\k\l\3\m\9\1\q\l\a\z\n\4\w\e\z\x\l\9\u\2\i\g\7\d\o\9\g\7\g\y\j\w\z\0\1\u\1\r\f\y\h\5\i\2\j\n\s\4\3\p\x\0\7\p\s\5\1\v\e\g\z\3\j\b\r\2\a\y\5\t\p\s\0\o\3\k\h\i\2\f\d\a\9\6\f\h\2\x\j\b\8\p\m\w\x\k\n\m\i\3\w\s\g\t\o\i\d\3\b\4\g\m\y\w\o\3\2\g\g\5\8\1\m\n\d\z\7\9\v\0\q\0\1\z\7\2\0\r\k\r\y\v\9\i\y\c\v\4\e\u\t\6\o\p\g\t\x\5\r\m\c\8\p\b\j\l\x\p\l\9\d\g\g\a\v\7\l\q\w\a\7\t\b\k\j\j\v\c\d\t\e\k\s\k ]] 00:10:58.794 00:10:58.794 real 0m16.266s 00:10:58.794 user 0m13.108s 00:10:58.794 sys 0m2.146s 00:10:58.794 08:53:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:58.794 ************************************ 00:10:58.794 END TEST dd_flags_misc_forced_aio 00:10:58.794 ************************************ 00:10:58.794 08:53:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:58.795 08:53:05 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:10:58.795 08:53:05 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:10:58.795 08:53:05 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:10:58.795 ************************************ 00:10:58.795 END TEST spdk_dd_posix 00:10:58.795 ************************************ 00:10:58.795 00:10:58.795 real 1m8.443s 00:10:58.795 user 0m53.384s 00:10:58.795 sys 0m18.204s 00:10:58.795 08:53:05 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:10:58.795 08:53:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:58.795 08:53:05 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:10:58.795 08:53:05 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:58.795 08:53:05 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:58.795 08:53:05 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:10:58.795 ************************************ 00:10:58.795 START TEST spdk_dd_malloc 00:10:58.795 ************************************ 00:10:58.795 08:53:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:10:58.795 * Looking for test storage... 00:10:58.795 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:10:58.795 08:53:05 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:58.795 08:53:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:58.795 08:53:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:58.795 08:53:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:58.795 08:53:05 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.795 08:53:05 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.795 08:53:05 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.795 08:53:05 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:10:58.795 08:53:05 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.795 08:53:05 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:10:58.795 08:53:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:58.795 08:53:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:58.795 08:53:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:10:58.795 ************************************ 00:10:58.795 START TEST dd_malloc_copy 00:10:58.795 ************************************ 00:10:58.795 08:53:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1125 -- # malloc_copy 00:10:58.795 08:53:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:10:58.795 08:53:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:10:58.795 08:53:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:10:58.795 08:53:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:10:58.795 08:53:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:10:58.795 08:53:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:10:58.795 08:53:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:10:58.795 08:53:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:10:58.795 08:53:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:10:58.795 08:53:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:10:58.795 { 00:10:58.795 "subsystems": [ 00:10:58.795 { 00:10:58.795 "subsystem": "bdev", 00:10:58.795 "config": [ 00:10:58.795 { 00:10:58.795 "params": { 00:10:58.795 "block_size": 512, 00:10:58.795 "num_blocks": 1048576, 00:10:58.795 "name": "malloc0" 00:10:58.795 }, 00:10:58.795 "method": "bdev_malloc_create" 00:10:58.795 }, 00:10:58.795 { 00:10:58.795 "params": { 00:10:58.795 "block_size": 512, 00:10:58.795 "num_blocks": 1048576, 00:10:58.795 "name": "malloc1" 00:10:58.795 }, 00:10:58.795 "method": "bdev_malloc_create" 00:10:58.795 }, 00:10:58.795 { 00:10:58.795 "method": "bdev_wait_for_examine" 00:10:58.795 } 00:10:58.795 ] 00:10:58.795 } 00:10:58.795 ] 00:10:58.795 } 00:10:58.795 [2024-07-25 08:53:05.869402] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:58.795 [2024-07-25 08:53:05.869578] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64980 ] 00:10:59.055 [2024-07-25 08:53:06.048014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.313 [2024-07-25 08:53:06.329374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.572 [2024-07-25 08:53:06.536843] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:08.350  Copying: 148/512 [MB] (148 MBps) Copying: 305/512 [MB] (156 MBps) Copying: 458/512 [MB] (153 MBps) Copying: 512/512 [MB] (average 152 MBps) 00:11:08.350 00:11:08.350 08:53:14 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:11:08.350 08:53:14 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:11:08.350 08:53:14 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:08.350 08:53:14 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:11:08.350 { 00:11:08.350 "subsystems": [ 00:11:08.350 { 00:11:08.350 "subsystem": "bdev", 00:11:08.350 "config": [ 00:11:08.350 { 00:11:08.350 "params": { 00:11:08.350 "block_size": 512, 00:11:08.350 "num_blocks": 1048576, 00:11:08.350 "name": "malloc0" 00:11:08.350 }, 00:11:08.350 "method": "bdev_malloc_create" 00:11:08.350 }, 00:11:08.350 { 00:11:08.350 "params": { 00:11:08.350 "block_size": 512, 00:11:08.350 "num_blocks": 1048576, 00:11:08.350 "name": "malloc1" 00:11:08.350 }, 00:11:08.350 "method": "bdev_malloc_create" 00:11:08.350 }, 00:11:08.350 { 00:11:08.350 "method": "bdev_wait_for_examine" 00:11:08.350 } 00:11:08.350 ] 00:11:08.350 } 00:11:08.350 ] 00:11:08.350 } 00:11:08.350 [2024-07-25 08:53:14.965159] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:11:08.350 [2024-07-25 08:53:14.965361] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65084 ] 00:11:08.350 [2024-07-25 08:53:15.142304] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.350 [2024-07-25 08:53:15.418823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.608 [2024-07-25 08:53:15.623046] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:17.364  Copying: 154/512 [MB] (154 MBps) Copying: 310/512 [MB] (156 MBps) Copying: 459/512 [MB] (149 MBps) Copying: 512/512 [MB] (average 153 MBps) 00:11:17.364 00:11:17.364 00:11:17.364 real 0m18.093s 00:11:17.364 user 0m16.626s 00:11:17.364 sys 0m1.243s 00:11:17.364 ************************************ 00:11:17.364 END TEST dd_malloc_copy 00:11:17.364 ************************************ 00:11:17.364 08:53:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:17.364 08:53:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:11:17.364 ************************************ 00:11:17.364 END TEST spdk_dd_malloc 00:11:17.364 ************************************ 00:11:17.364 00:11:17.364 real 0m18.227s 00:11:17.364 user 0m16.682s 00:11:17.364 sys 0m1.321s 00:11:17.364 08:53:23 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:17.364 08:53:23 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:11:17.364 08:53:23 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:11:17.364 08:53:23 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:17.364 08:53:23 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:17.364 08:53:23 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:17.364 ************************************ 00:11:17.364 START TEST spdk_dd_bdev_to_bdev 00:11:17.364 ************************************ 00:11:17.364 08:53:23 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:11:17.364 * Looking for test storage... 
00:11:17.364 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:17.364 08:53:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:17.364 08:53:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:17.364 08:53:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:17.364 08:53:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:17.364 08:53:24 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.364 08:53:24 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.364 08:53:24 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.364 08:53:24 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:11:17.364 08:53:24 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.364 08:53:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:11:17.364 08:53:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:11:17.364 08:53:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:11:17.364 08:53:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:11:17.364 08:53:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:11:17.364 
08:53:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:11:17.364 08:53:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:11:17.364 08:53:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:11:17.364 08:53:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:11:17.364 08:53:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:11:17.364 08:53:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:11:17.364 08:53:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:11:17.364 08:53:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:11:17.364 08:53:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:11:17.364 08:53:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:17.364 08:53:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:17.364 08:53:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:11:17.364 08:53:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:11:17.364 08:53:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:11:17.364 08:53:24 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:17.364 08:53:24 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:17.364 08:53:24 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:17.364 ************************************ 00:11:17.364 START TEST dd_inflate_file 00:11:17.364 ************************************ 00:11:17.365 08:53:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:11:17.365 [2024-07-25 08:53:24.122634] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:11:17.365 [2024-07-25 08:53:24.122803] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65250 ] 00:11:17.365 [2024-07-25 08:53:24.285897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.622 [2024-07-25 08:53:24.540114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.880 [2024-07-25 08:53:24.752877] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:19.252  Copying: 64/64 [MB] (average 1777 MBps) 00:11:19.252 00:11:19.252 00:11:19.252 real 0m2.069s 00:11:19.252 user 0m1.679s 00:11:19.252 sys 0m1.079s 00:11:19.252 08:53:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:19.252 ************************************ 00:11:19.252 08:53:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:11:19.252 END TEST dd_inflate_file 00:11:19.252 ************************************ 00:11:19.252 08:53:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:11:19.252 08:53:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:11:19.252 08:53:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:11:19.252 08:53:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:11:19.252 08:53:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:11:19.252 08:53:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:11:19.252 08:53:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:19.252 08:53:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:19.252 08:53:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:19.253 ************************************ 00:11:19.253 START TEST dd_copy_to_out_bdev 00:11:19.253 ************************************ 00:11:19.253 08:53:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:11:19.253 { 00:11:19.253 "subsystems": [ 00:11:19.253 { 00:11:19.253 "subsystem": "bdev", 00:11:19.253 "config": [ 00:11:19.253 { 00:11:19.253 "params": { 00:11:19.253 "trtype": "pcie", 00:11:19.253 "traddr": "0000:00:10.0", 00:11:19.253 "name": "Nvme0" 00:11:19.253 }, 00:11:19.253 "method": "bdev_nvme_attach_controller" 00:11:19.253 }, 00:11:19.253 { 00:11:19.253 "params": { 00:11:19.253 "trtype": "pcie", 00:11:19.253 "traddr": "0000:00:11.0", 00:11:19.253 "name": "Nvme1" 00:11:19.253 }, 00:11:19.253 "method": "bdev_nvme_attach_controller" 00:11:19.253 }, 00:11:19.253 { 00:11:19.253 "method": "bdev_wait_for_examine" 00:11:19.253 } 00:11:19.253 ] 00:11:19.253 } 00:11:19.253 ] 00:11:19.253 } 00:11:19.253 [2024-07-25 08:53:26.254115] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:11:19.253 [2024-07-25 08:53:26.254267] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65301 ] 00:11:19.510 [2024-07-25 08:53:26.418888] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.768 [2024-07-25 08:53:26.670235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.768 [2024-07-25 08:53:26.882213] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:22.773  Copying: 53/64 [MB] (53 MBps) Copying: 64/64 [MB] (average 53 MBps) 00:11:22.774 00:11:22.774 00:11:22.774 real 0m3.403s 00:11:22.774 user 0m3.029s 00:11:22.774 sys 0m2.278s 00:11:22.774 ************************************ 00:11:22.774 END TEST dd_copy_to_out_bdev 00:11:22.774 ************************************ 00:11:22.774 08:53:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:22.774 08:53:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:22.774 08:53:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:11:22.774 08:53:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:11:22.774 08:53:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:22.774 08:53:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:22.774 08:53:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:22.774 ************************************ 00:11:22.774 START TEST dd_offset_magic 00:11:22.774 ************************************ 00:11:22.774 08:53:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1125 -- # offset_magic 00:11:22.774 08:53:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:11:22.774 08:53:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:11:22.774 08:53:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:11:22.774 08:53:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:11:22.774 08:53:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:11:22.774 08:53:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:11:22.774 08:53:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:11:22.774 08:53:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:22.774 { 00:11:22.774 "subsystems": [ 00:11:22.774 { 00:11:22.774 "subsystem": "bdev", 00:11:22.774 "config": [ 00:11:22.774 { 00:11:22.774 "params": { 00:11:22.774 "trtype": "pcie", 00:11:22.774 "traddr": "0000:00:10.0", 00:11:22.774 "name": "Nvme0" 00:11:22.774 }, 00:11:22.774 "method": "bdev_nvme_attach_controller" 00:11:22.774 }, 00:11:22.774 { 00:11:22.774 "params": { 00:11:22.774 "trtype": "pcie", 00:11:22.774 "traddr": "0000:00:11.0", 00:11:22.774 "name": "Nvme1" 00:11:22.774 }, 00:11:22.774 "method": 
"bdev_nvme_attach_controller" 00:11:22.774 }, 00:11:22.774 { 00:11:22.774 "method": "bdev_wait_for_examine" 00:11:22.774 } 00:11:22.774 ] 00:11:22.774 } 00:11:22.774 ] 00:11:22.774 } 00:11:22.774 [2024-07-25 08:53:29.726739] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:22.774 [2024-07-25 08:53:29.726915] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65364 ] 00:11:23.032 [2024-07-25 08:53:29.895970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.290 [2024-07-25 08:53:30.159632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.290 [2024-07-25 08:53:30.363918] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:24.790  Copying: 65/65 [MB] (average 1048 MBps) 00:11:24.790 00:11:24.790 08:53:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:11:24.791 08:53:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:11:24.791 08:53:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:11:24.791 08:53:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:24.791 { 00:11:24.791 "subsystems": [ 00:11:24.791 { 00:11:24.791 "subsystem": "bdev", 00:11:24.791 "config": [ 00:11:24.791 { 00:11:24.791 "params": { 00:11:24.791 "trtype": "pcie", 00:11:24.791 "traddr": "0000:00:10.0", 00:11:24.791 "name": "Nvme0" 00:11:24.791 }, 00:11:24.791 "method": "bdev_nvme_attach_controller" 00:11:24.791 }, 00:11:24.791 { 00:11:24.791 "params": { 00:11:24.791 "trtype": "pcie", 00:11:24.791 "traddr": "0000:00:11.0", 00:11:24.791 "name": "Nvme1" 00:11:24.791 }, 00:11:24.791 "method": "bdev_nvme_attach_controller" 00:11:24.791 }, 00:11:24.791 { 00:11:24.791 "method": "bdev_wait_for_examine" 00:11:24.791 } 00:11:24.791 ] 00:11:24.791 } 00:11:24.791 ] 00:11:24.791 } 00:11:24.791 [2024-07-25 08:53:31.796127] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:11:24.791 [2024-07-25 08:53:31.796288] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65396 ] 00:11:25.049 [2024-07-25 08:53:31.961961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.308 [2024-07-25 08:53:32.183094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.308 [2024-07-25 08:53:32.376616] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:26.941  Copying: 1024/1024 [kB] (average 1000 MBps) 00:11:26.941 00:11:26.941 08:53:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:11:26.941 08:53:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:11:26.941 08:53:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:11:26.941 08:53:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:11:26.941 08:53:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:11:26.941 08:53:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:11:26.941 08:53:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:26.941 { 00:11:26.941 "subsystems": [ 00:11:26.941 { 00:11:26.941 "subsystem": "bdev", 00:11:26.941 "config": [ 00:11:26.941 { 00:11:26.941 "params": { 00:11:26.941 "trtype": "pcie", 00:11:26.941 "traddr": "0000:00:10.0", 00:11:26.941 "name": "Nvme0" 00:11:26.941 }, 00:11:26.941 "method": "bdev_nvme_attach_controller" 00:11:26.941 }, 00:11:26.941 { 00:11:26.941 "params": { 00:11:26.941 "trtype": "pcie", 00:11:26.941 "traddr": "0000:00:11.0", 00:11:26.941 "name": "Nvme1" 00:11:26.941 }, 00:11:26.941 "method": "bdev_nvme_attach_controller" 00:11:26.941 }, 00:11:26.941 { 00:11:26.941 "method": "bdev_wait_for_examine" 00:11:26.941 } 00:11:26.941 ] 00:11:26.941 } 00:11:26.941 ] 00:11:26.941 } 00:11:26.942 [2024-07-25 08:53:33.922047] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:11:26.942 [2024-07-25 08:53:33.922243] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65430 ] 00:11:27.199 [2024-07-25 08:53:34.092216] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.457 [2024-07-25 08:53:34.319591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.457 [2024-07-25 08:53:34.516117] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:28.958  Copying: 65/65 [MB] (average 1048 MBps) 00:11:28.958 00:11:28.958 08:53:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:11:28.958 08:53:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:11:28.958 08:53:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:11:28.958 08:53:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:28.958 { 00:11:28.958 "subsystems": [ 00:11:28.958 { 00:11:28.958 "subsystem": "bdev", 00:11:28.958 "config": [ 00:11:28.958 { 00:11:28.958 "params": { 00:11:28.958 "trtype": "pcie", 00:11:28.958 "traddr": "0000:00:10.0", 00:11:28.958 "name": "Nvme0" 00:11:28.958 }, 00:11:28.958 "method": "bdev_nvme_attach_controller" 00:11:28.958 }, 00:11:28.958 { 00:11:28.958 "params": { 00:11:28.958 "trtype": "pcie", 00:11:28.958 "traddr": "0000:00:11.0", 00:11:28.958 "name": "Nvme1" 00:11:28.958 }, 00:11:28.958 "method": "bdev_nvme_attach_controller" 00:11:28.958 }, 00:11:28.958 { 00:11:28.958 "method": "bdev_wait_for_examine" 00:11:28.958 } 00:11:28.958 ] 00:11:28.958 } 00:11:28.958 ] 00:11:28.958 } 00:11:28.958 [2024-07-25 08:53:35.957678] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:11:28.958 [2024-07-25 08:53:35.957854] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65461 ] 00:11:29.216 [2024-07-25 08:53:36.120885] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.474 [2024-07-25 08:53:36.358333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.474 [2024-07-25 08:53:36.557923] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:31.108  Copying: 1024/1024 [kB] (average 500 MBps) 00:11:31.108 00:11:31.108 08:53:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:11:31.108 08:53:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:11:31.108 00:11:31.108 real 0m8.380s 00:11:31.108 user 0m7.029s 00:11:31.108 sys 0m2.639s 00:11:31.108 08:53:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:31.108 ************************************ 00:11:31.108 END TEST dd_offset_magic 00:11:31.108 ************************************ 00:11:31.108 08:53:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:31.108 08:53:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:11:31.108 08:53:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:11:31.108 08:53:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:11:31.108 08:53:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:11:31.108 08:53:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:11:31.108 08:53:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:11:31.108 08:53:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:11:31.108 08:53:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:11:31.108 08:53:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:11:31.108 08:53:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:11:31.108 08:53:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:31.108 { 00:11:31.108 "subsystems": [ 00:11:31.108 { 00:11:31.108 "subsystem": "bdev", 00:11:31.108 "config": [ 00:11:31.108 { 00:11:31.108 "params": { 00:11:31.108 "trtype": "pcie", 00:11:31.108 "traddr": "0000:00:10.0", 00:11:31.108 "name": "Nvme0" 00:11:31.108 }, 00:11:31.108 "method": "bdev_nvme_attach_controller" 00:11:31.108 }, 00:11:31.108 { 00:11:31.108 "params": { 00:11:31.108 "trtype": "pcie", 00:11:31.108 "traddr": "0000:00:11.0", 00:11:31.108 "name": "Nvme1" 00:11:31.108 }, 00:11:31.108 "method": "bdev_nvme_attach_controller" 00:11:31.108 }, 00:11:31.108 { 00:11:31.108 "method": "bdev_wait_for_examine" 00:11:31.108 } 00:11:31.108 ] 00:11:31.108 } 00:11:31.108 ] 00:11:31.108 } 00:11:31.108 [2024-07-25 08:53:38.140971] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:11:31.108 [2024-07-25 08:53:38.141149] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65510 ] 00:11:31.367 [2024-07-25 08:53:38.309042] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.624 [2024-07-25 08:53:38.545484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.882 [2024-07-25 08:53:38.744018] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:33.074  Copying: 5120/5120 [kB] (average 1250 MBps) 00:11:33.074 00:11:33.074 08:53:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:11:33.074 08:53:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:11:33.074 08:53:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:11:33.074 08:53:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:11:33.074 08:53:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:11:33.075 08:53:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:11:33.075 08:53:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:11:33.075 08:53:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:11:33.075 08:53:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:11:33.075 08:53:39 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:33.075 { 00:11:33.075 "subsystems": [ 00:11:33.075 { 00:11:33.075 "subsystem": "bdev", 00:11:33.075 "config": [ 00:11:33.075 { 00:11:33.075 "params": { 00:11:33.075 "trtype": "pcie", 00:11:33.075 "traddr": "0000:00:10.0", 00:11:33.075 "name": "Nvme0" 00:11:33.075 }, 00:11:33.075 "method": "bdev_nvme_attach_controller" 00:11:33.075 }, 00:11:33.075 { 00:11:33.075 "params": { 00:11:33.075 "trtype": "pcie", 00:11:33.075 "traddr": "0000:00:11.0", 00:11:33.075 "name": "Nvme1" 00:11:33.075 }, 00:11:33.075 "method": "bdev_nvme_attach_controller" 00:11:33.075 }, 00:11:33.075 { 00:11:33.075 "method": "bdev_wait_for_examine" 00:11:33.075 } 00:11:33.075 ] 00:11:33.075 } 00:11:33.075 ] 00:11:33.075 } 00:11:33.075 [2024-07-25 08:53:40.056177] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:11:33.075 [2024-07-25 08:53:40.056354] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65539 ] 00:11:33.333 [2024-07-25 08:53:40.226501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:33.591 [2024-07-25 08:53:40.459468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.591 [2024-07-25 08:53:40.660290] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:35.223  Copying: 5120/5120 [kB] (average 833 MBps) 00:11:35.223 00:11:35.223 08:53:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:11:35.223 00:11:35.223 real 0m18.191s 00:11:35.223 user 0m15.228s 00:11:35.223 sys 0m8.010s 00:11:35.223 08:53:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:35.223 08:53:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:35.223 ************************************ 00:11:35.223 END TEST spdk_dd_bdev_to_bdev 00:11:35.223 ************************************ 00:11:35.223 08:53:42 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:11:35.223 08:53:42 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:11:35.223 08:53:42 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:35.223 08:53:42 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:35.223 08:53:42 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:35.223 ************************************ 00:11:35.223 START TEST spdk_dd_uring 00:11:35.223 ************************************ 00:11:35.223 08:53:42 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:11:35.223 * Looking for test storage... 
00:11:35.223 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:35.223 08:53:42 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:35.223 08:53:42 spdk_dd.spdk_dd_uring -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:35.223 08:53:42 spdk_dd.spdk_dd_uring -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:35.223 08:53:42 spdk_dd.spdk_dd_uring -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:35.224 08:53:42 spdk_dd.spdk_dd_uring -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.224 08:53:42 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.224 08:53:42 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.224 08:53:42 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:11:35.224 08:53:42 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.224 08:53:42 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:11:35.224 08:53:42 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:35.224 08:53:42 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:35.224 08:53:42 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:11:35.224 ************************************ 00:11:35.224 START TEST dd_uring_copy 00:11:35.224 ************************************ 00:11:35.224 
08:53:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1125 -- # uring_zram_copy 00:11:35.224 08:53:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:11:35.224 08:53:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:11:35.224 08:53:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:11:35.224 08:53:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:11:35.224 08:53:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:11:35.224 08:53:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:11:35.224 08:53:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:11:35.224 08:53:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:11:35.224 08:53:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:11:35.224 08:53:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:11:35.224 08:53:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:11:35.224 08:53:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:11:35.224 08:53:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:11:35.224 08:53:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:11:35.224 08:53:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:11:35.224 08:53:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:11:35.224 08:53:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:11:35.224 08:53:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:11:35.224 08:53:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:11:35.224 08:53:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:11:35.224 08:53:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:11:35.224 08:53:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:11:35.224 08:53:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:11:35.224 08:53:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:11:35.224 08:53:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:35.224 08:53:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # 
magic=qw9xelhxkdkgbjvzs5ing2xaed22fj89vqx4l1i3oshtxdav5ab65h93htivscrucd4smqdsm9srsx1wg07a8hmwpks047rr2lt5z1uokt9k6l38wl49zgcfzb0symuxq2rdt8j2dwbrat2ozvozbfliyimea5ynb30uwy1q4tsb3uhpn68rdzx268isuue1r6g24mve27ukokj45qk0f0fgka73rvexmlco282wjw0ruhd02jm5quiwgpfzsmn5y7uuo751oe928im6u3mlwsen7yiw69kmprbuuyktoeqekmv0v4vn9jgk15pjxiz4mmhn87z756419utzhc9cy98kstqupv1hu3d5tbeckp60cv2o9g5039lp21pwesq7qji6032va5ej21qmd1q7vcc8pgnh41wxodbnjc4qrpuj7e60d5rq232p4aui4oveyvmjikoyf7lmmmtue4o1vl4kxzbjyrp4embbv5s0bd7uosh8eep8390p2f25widej6s06ue1ivapijgfdb49ezzkshk8c3mjvriamxy4clwc8yeiscciid3r0pflqg039zoe8v2c0156ri390moipaf4tlbf1b040ug6cczy5zaznbj67npj2drlrebaepkyw9mxcmj1hns3w7uo3geusixing6mhzh2gsbtxapf979g4r49utpv3p1pichnbziqwl5me022cqso2huu59jbco8abqx7vlextf1lrcpodw30nhi4v7e4rea44km0xgqdrkwhogk046y3q0h3cs6pya96gbs8dma4vvvlgy6vtbnreqybk0dme94n2wc21bl6d37ydmumqlvbrmwaksuj4q6ihqb9wpnhgm7rckpe6b355agem0fm3nnr2tk7r7cg2sqre88dg8gvgs0ehwl51vuj34ne5sy13qbs8ajf1sk90k19gffdjwkwx9z4av4cstcsny584e73k0bg2zucswc1dk6lzqq2ipph1wqfq6sm02pjatj92gv3e9ci4lfi 00:11:35.224 08:53:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo qw9xelhxkdkgbjvzs5ing2xaed22fj89vqx4l1i3oshtxdav5ab65h93htivscrucd4smqdsm9srsx1wg07a8hmwpks047rr2lt5z1uokt9k6l38wl49zgcfzb0symuxq2rdt8j2dwbrat2ozvozbfliyimea5ynb30uwy1q4tsb3uhpn68rdzx268isuue1r6g24mve27ukokj45qk0f0fgka73rvexmlco282wjw0ruhd02jm5quiwgpfzsmn5y7uuo751oe928im6u3mlwsen7yiw69kmprbuuyktoeqekmv0v4vn9jgk15pjxiz4mmhn87z756419utzhc9cy98kstqupv1hu3d5tbeckp60cv2o9g5039lp21pwesq7qji6032va5ej21qmd1q7vcc8pgnh41wxodbnjc4qrpuj7e60d5rq232p4aui4oveyvmjikoyf7lmmmtue4o1vl4kxzbjyrp4embbv5s0bd7uosh8eep8390p2f25widej6s06ue1ivapijgfdb49ezzkshk8c3mjvriamxy4clwc8yeiscciid3r0pflqg039zoe8v2c0156ri390moipaf4tlbf1b040ug6cczy5zaznbj67npj2drlrebaepkyw9mxcmj1hns3w7uo3geusixing6mhzh2gsbtxapf979g4r49utpv3p1pichnbziqwl5me022cqso2huu59jbco8abqx7vlextf1lrcpodw30nhi4v7e4rea44km0xgqdrkwhogk046y3q0h3cs6pya96gbs8dma4vvvlgy6vtbnreqybk0dme94n2wc21bl6d37ydmumqlvbrmwaksuj4q6ihqb9wpnhgm7rckpe6b355agem0fm3nnr2tk7r7cg2sqre88dg8gvgs0ehwl51vuj34ne5sy13qbs8ajf1sk90k19gffdjwkwx9z4av4cstcsny584e73k0bg2zucswc1dk6lzqq2ipph1wqfq6sm02pjatj92gv3e9ci4lfi 00:11:35.224 08:53:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:11:35.482 [2024-07-25 08:53:42.391269] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
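Note: the padding step above (--if=/dev/zero --of=magic.dump0 --oflag=append --bs=536869887 --count=1) is sized so that the dump file ends up exactly as large as the 512M zram device. The magic produced by gen_bytes 1024 is 1024 characters, echo presumably adds one newline when it is written into magic.dump0 (the redirection is not visible in xtrace), and 1025 + 536869887 = 536870912 bytes = 512 MiB. That is why the append reports 'Copying: 511/511 [MB]' just below while the later whole-file copies report 512/512 [MB]. Quick check (hypothetical snippet, not part of the test):

echo $(( 1024 + 1 + 536869887 ))   # 536870912
echo $(( 512 * 1024 * 1024 ))      # 536870912, exactly the zram disksize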
00:11:35.483 [2024-07-25 08:53:42.391433] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65621 ] 00:11:35.483 [2024-07-25 08:53:42.561345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.742 [2024-07-25 08:53:42.799633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.000 [2024-07-25 08:53:42.997854] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:39.934  Copying: 511/511 [MB] (average 1193 MBps) 00:11:39.934 00:11:39.934 08:53:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:11:39.934 08:53:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:11:39.934 08:53:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:39.934 08:53:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:39.934 { 00:11:39.934 "subsystems": [ 00:11:39.934 { 00:11:39.934 "subsystem": "bdev", 00:11:39.934 "config": [ 00:11:39.934 { 00:11:39.934 "params": { 00:11:39.934 "block_size": 512, 00:11:39.934 "num_blocks": 1048576, 00:11:39.934 "name": "malloc0" 00:11:39.934 }, 00:11:39.934 "method": "bdev_malloc_create" 00:11:39.934 }, 00:11:39.934 { 00:11:39.934 "params": { 00:11:39.934 "filename": "/dev/zram1", 00:11:39.934 "name": "uring0" 00:11:39.934 }, 00:11:39.934 "method": "bdev_uring_create" 00:11:39.934 }, 00:11:39.934 { 00:11:39.934 "method": "bdev_wait_for_examine" 00:11:39.935 } 00:11:39.935 ] 00:11:39.935 } 00:11:39.935 ] 00:11:39.935 } 00:11:39.935 [2024-07-25 08:53:46.723743] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
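Note: the --json /dev/fd/62 argument on these spdk_dd runs is consistent with the harness feeding gen_conf output through a process-substitution file descriptor, which is why the bdev configuration is printed inline in the log rather than read from a file on disk. Outside the harness the same copy can be reproduced by putting that config, copied verbatim from above, into a file; the delivery mechanism is the only assumption here:

cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc0" },
          "method": "bdev_malloc_create" },
        { "params": { "filename": "/dev/zram1", "name": "uring0" },
          "method": "bdev_uring_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
  --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json "$cfg"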
00:11:39.935 [2024-07-25 08:53:46.724001] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65671 ] 00:11:39.935 [2024-07-25 08:53:46.904334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.193 [2024-07-25 08:53:47.136568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.451 [2024-07-25 08:53:47.340432] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:46.441  Copying: 186/512 [MB] (186 MBps) Copying: 380/512 [MB] (193 MBps) Copying: 512/512 [MB] (average 189 MBps) 00:11:46.441 00:11:46.441 08:53:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:11:46.441 08:53:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:11:46.441 08:53:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:46.441 08:53:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:46.441 { 00:11:46.441 "subsystems": [ 00:11:46.441 { 00:11:46.441 "subsystem": "bdev", 00:11:46.441 "config": [ 00:11:46.441 { 00:11:46.441 "params": { 00:11:46.441 "block_size": 512, 00:11:46.441 "num_blocks": 1048576, 00:11:46.441 "name": "malloc0" 00:11:46.441 }, 00:11:46.441 "method": "bdev_malloc_create" 00:11:46.441 }, 00:11:46.441 { 00:11:46.441 "params": { 00:11:46.441 "filename": "/dev/zram1", 00:11:46.441 "name": "uring0" 00:11:46.441 }, 00:11:46.441 "method": "bdev_uring_create" 00:11:46.441 }, 00:11:46.441 { 00:11:46.441 "method": "bdev_wait_for_examine" 00:11:46.441 } 00:11:46.441 ] 00:11:46.441 } 00:11:46.441 ] 00:11:46.441 } 00:11:46.441 [2024-07-25 08:53:53.315672] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:11:46.441 [2024-07-25 08:53:53.315926] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65755 ] 00:11:46.441 [2024-07-25 08:53:53.493177] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.699 [2024-07-25 08:53:53.729830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.957 [2024-07-25 08:53:53.932388] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:53.867  Copying: 149/512 [MB] (149 MBps) Copying: 289/512 [MB] (140 MBps) Copying: 440/512 [MB] (150 MBps) Copying: 512/512 [MB] (average 140 MBps) 00:11:53.867 00:11:53.867 08:54:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:11:53.867 08:54:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ qw9xelhxkdkgbjvzs5ing2xaed22fj89vqx4l1i3oshtxdav5ab65h93htivscrucd4smqdsm9srsx1wg07a8hmwpks047rr2lt5z1uokt9k6l38wl49zgcfzb0symuxq2rdt8j2dwbrat2ozvozbfliyimea5ynb30uwy1q4tsb3uhpn68rdzx268isuue1r6g24mve27ukokj45qk0f0fgka73rvexmlco282wjw0ruhd02jm5quiwgpfzsmn5y7uuo751oe928im6u3mlwsen7yiw69kmprbuuyktoeqekmv0v4vn9jgk15pjxiz4mmhn87z756419utzhc9cy98kstqupv1hu3d5tbeckp60cv2o9g5039lp21pwesq7qji6032va5ej21qmd1q7vcc8pgnh41wxodbnjc4qrpuj7e60d5rq232p4aui4oveyvmjikoyf7lmmmtue4o1vl4kxzbjyrp4embbv5s0bd7uosh8eep8390p2f25widej6s06ue1ivapijgfdb49ezzkshk8c3mjvriamxy4clwc8yeiscciid3r0pflqg039zoe8v2c0156ri390moipaf4tlbf1b040ug6cczy5zaznbj67npj2drlrebaepkyw9mxcmj1hns3w7uo3geusixing6mhzh2gsbtxapf979g4r49utpv3p1pichnbziqwl5me022cqso2huu59jbco8abqx7vlextf1lrcpodw30nhi4v7e4rea44km0xgqdrkwhogk046y3q0h3cs6pya96gbs8dma4vvvlgy6vtbnreqybk0dme94n2wc21bl6d37ydmumqlvbrmwaksuj4q6ihqb9wpnhgm7rckpe6b355agem0fm3nnr2tk7r7cg2sqre88dg8gvgs0ehwl51vuj34ne5sy13qbs8ajf1sk90k19gffdjwkwx9z4av4cstcsny584e73k0bg2zucswc1dk6lzqq2ipph1wqfq6sm02pjatj92gv3e9ci4lfi == 
\q\w\9\x\e\l\h\x\k\d\k\g\b\j\v\z\s\5\i\n\g\2\x\a\e\d\2\2\f\j\8\9\v\q\x\4\l\1\i\3\o\s\h\t\x\d\a\v\5\a\b\6\5\h\9\3\h\t\i\v\s\c\r\u\c\d\4\s\m\q\d\s\m\9\s\r\s\x\1\w\g\0\7\a\8\h\m\w\p\k\s\0\4\7\r\r\2\l\t\5\z\1\u\o\k\t\9\k\6\l\3\8\w\l\4\9\z\g\c\f\z\b\0\s\y\m\u\x\q\2\r\d\t\8\j\2\d\w\b\r\a\t\2\o\z\v\o\z\b\f\l\i\y\i\m\e\a\5\y\n\b\3\0\u\w\y\1\q\4\t\s\b\3\u\h\p\n\6\8\r\d\z\x\2\6\8\i\s\u\u\e\1\r\6\g\2\4\m\v\e\2\7\u\k\o\k\j\4\5\q\k\0\f\0\f\g\k\a\7\3\r\v\e\x\m\l\c\o\2\8\2\w\j\w\0\r\u\h\d\0\2\j\m\5\q\u\i\w\g\p\f\z\s\m\n\5\y\7\u\u\o\7\5\1\o\e\9\2\8\i\m\6\u\3\m\l\w\s\e\n\7\y\i\w\6\9\k\m\p\r\b\u\u\y\k\t\o\e\q\e\k\m\v\0\v\4\v\n\9\j\g\k\1\5\p\j\x\i\z\4\m\m\h\n\8\7\z\7\5\6\4\1\9\u\t\z\h\c\9\c\y\9\8\k\s\t\q\u\p\v\1\h\u\3\d\5\t\b\e\c\k\p\6\0\c\v\2\o\9\g\5\0\3\9\l\p\2\1\p\w\e\s\q\7\q\j\i\6\0\3\2\v\a\5\e\j\2\1\q\m\d\1\q\7\v\c\c\8\p\g\n\h\4\1\w\x\o\d\b\n\j\c\4\q\r\p\u\j\7\e\6\0\d\5\r\q\2\3\2\p\4\a\u\i\4\o\v\e\y\v\m\j\i\k\o\y\f\7\l\m\m\m\t\u\e\4\o\1\v\l\4\k\x\z\b\j\y\r\p\4\e\m\b\b\v\5\s\0\b\d\7\u\o\s\h\8\e\e\p\8\3\9\0\p\2\f\2\5\w\i\d\e\j\6\s\0\6\u\e\1\i\v\a\p\i\j\g\f\d\b\4\9\e\z\z\k\s\h\k\8\c\3\m\j\v\r\i\a\m\x\y\4\c\l\w\c\8\y\e\i\s\c\c\i\i\d\3\r\0\p\f\l\q\g\0\3\9\z\o\e\8\v\2\c\0\1\5\6\r\i\3\9\0\m\o\i\p\a\f\4\t\l\b\f\1\b\0\4\0\u\g\6\c\c\z\y\5\z\a\z\n\b\j\6\7\n\p\j\2\d\r\l\r\e\b\a\e\p\k\y\w\9\m\x\c\m\j\1\h\n\s\3\w\7\u\o\3\g\e\u\s\i\x\i\n\g\6\m\h\z\h\2\g\s\b\t\x\a\p\f\9\7\9\g\4\r\4\9\u\t\p\v\3\p\1\p\i\c\h\n\b\z\i\q\w\l\5\m\e\0\2\2\c\q\s\o\2\h\u\u\5\9\j\b\c\o\8\a\b\q\x\7\v\l\e\x\t\f\1\l\r\c\p\o\d\w\3\0\n\h\i\4\v\7\e\4\r\e\a\4\4\k\m\0\x\g\q\d\r\k\w\h\o\g\k\0\4\6\y\3\q\0\h\3\c\s\6\p\y\a\9\6\g\b\s\8\d\m\a\4\v\v\v\l\g\y\6\v\t\b\n\r\e\q\y\b\k\0\d\m\e\9\4\n\2\w\c\2\1\b\l\6\d\3\7\y\d\m\u\m\q\l\v\b\r\m\w\a\k\s\u\j\4\q\6\i\h\q\b\9\w\p\n\h\g\m\7\r\c\k\p\e\6\b\3\5\5\a\g\e\m\0\f\m\3\n\n\r\2\t\k\7\r\7\c\g\2\s\q\r\e\8\8\d\g\8\g\v\g\s\0\e\h\w\l\5\1\v\u\j\3\4\n\e\5\s\y\1\3\q\b\s\8\a\j\f\1\s\k\9\0\k\1\9\g\f\f\d\j\w\k\w\x\9\z\4\a\v\4\c\s\t\c\s\n\y\5\8\4\e\7\3\k\0\b\g\2\z\u\c\s\w\c\1\d\k\6\l\z\q\q\2\i\p\p\h\1\w\q\f\q\6\s\m\0\2\p\j\a\t\j\9\2\g\v\3\e\9\c\i\4\l\f\i ]] 00:11:53.867 08:54:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:11:53.867 08:54:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ qw9xelhxkdkgbjvzs5ing2xaed22fj89vqx4l1i3oshtxdav5ab65h93htivscrucd4smqdsm9srsx1wg07a8hmwpks047rr2lt5z1uokt9k6l38wl49zgcfzb0symuxq2rdt8j2dwbrat2ozvozbfliyimea5ynb30uwy1q4tsb3uhpn68rdzx268isuue1r6g24mve27ukokj45qk0f0fgka73rvexmlco282wjw0ruhd02jm5quiwgpfzsmn5y7uuo751oe928im6u3mlwsen7yiw69kmprbuuyktoeqekmv0v4vn9jgk15pjxiz4mmhn87z756419utzhc9cy98kstqupv1hu3d5tbeckp60cv2o9g5039lp21pwesq7qji6032va5ej21qmd1q7vcc8pgnh41wxodbnjc4qrpuj7e60d5rq232p4aui4oveyvmjikoyf7lmmmtue4o1vl4kxzbjyrp4embbv5s0bd7uosh8eep8390p2f25widej6s06ue1ivapijgfdb49ezzkshk8c3mjvriamxy4clwc8yeiscciid3r0pflqg039zoe8v2c0156ri390moipaf4tlbf1b040ug6cczy5zaznbj67npj2drlrebaepkyw9mxcmj1hns3w7uo3geusixing6mhzh2gsbtxapf979g4r49utpv3p1pichnbziqwl5me022cqso2huu59jbco8abqx7vlextf1lrcpodw30nhi4v7e4rea44km0xgqdrkwhogk046y3q0h3cs6pya96gbs8dma4vvvlgy6vtbnreqybk0dme94n2wc21bl6d37ydmumqlvbrmwaksuj4q6ihqb9wpnhgm7rckpe6b355agem0fm3nnr2tk7r7cg2sqre88dg8gvgs0ehwl51vuj34ne5sy13qbs8ajf1sk90k19gffdjwkwx9z4av4cstcsny584e73k0bg2zucswc1dk6lzqq2ipph1wqfq6sm02pjatj92gv3e9ci4lfi == 
\q\w\9\x\e\l\h\x\k\d\k\g\b\j\v\z\s\5\i\n\g\2\x\a\e\d\2\2\f\j\8\9\v\q\x\4\l\1\i\3\o\s\h\t\x\d\a\v\5\a\b\6\5\h\9\3\h\t\i\v\s\c\r\u\c\d\4\s\m\q\d\s\m\9\s\r\s\x\1\w\g\0\7\a\8\h\m\w\p\k\s\0\4\7\r\r\2\l\t\5\z\1\u\o\k\t\9\k\6\l\3\8\w\l\4\9\z\g\c\f\z\b\0\s\y\m\u\x\q\2\r\d\t\8\j\2\d\w\b\r\a\t\2\o\z\v\o\z\b\f\l\i\y\i\m\e\a\5\y\n\b\3\0\u\w\y\1\q\4\t\s\b\3\u\h\p\n\6\8\r\d\z\x\2\6\8\i\s\u\u\e\1\r\6\g\2\4\m\v\e\2\7\u\k\o\k\j\4\5\q\k\0\f\0\f\g\k\a\7\3\r\v\e\x\m\l\c\o\2\8\2\w\j\w\0\r\u\h\d\0\2\j\m\5\q\u\i\w\g\p\f\z\s\m\n\5\y\7\u\u\o\7\5\1\o\e\9\2\8\i\m\6\u\3\m\l\w\s\e\n\7\y\i\w\6\9\k\m\p\r\b\u\u\y\k\t\o\e\q\e\k\m\v\0\v\4\v\n\9\j\g\k\1\5\p\j\x\i\z\4\m\m\h\n\8\7\z\7\5\6\4\1\9\u\t\z\h\c\9\c\y\9\8\k\s\t\q\u\p\v\1\h\u\3\d\5\t\b\e\c\k\p\6\0\c\v\2\o\9\g\5\0\3\9\l\p\2\1\p\w\e\s\q\7\q\j\i\6\0\3\2\v\a\5\e\j\2\1\q\m\d\1\q\7\v\c\c\8\p\g\n\h\4\1\w\x\o\d\b\n\j\c\4\q\r\p\u\j\7\e\6\0\d\5\r\q\2\3\2\p\4\a\u\i\4\o\v\e\y\v\m\j\i\k\o\y\f\7\l\m\m\m\t\u\e\4\o\1\v\l\4\k\x\z\b\j\y\r\p\4\e\m\b\b\v\5\s\0\b\d\7\u\o\s\h\8\e\e\p\8\3\9\0\p\2\f\2\5\w\i\d\e\j\6\s\0\6\u\e\1\i\v\a\p\i\j\g\f\d\b\4\9\e\z\z\k\s\h\k\8\c\3\m\j\v\r\i\a\m\x\y\4\c\l\w\c\8\y\e\i\s\c\c\i\i\d\3\r\0\p\f\l\q\g\0\3\9\z\o\e\8\v\2\c\0\1\5\6\r\i\3\9\0\m\o\i\p\a\f\4\t\l\b\f\1\b\0\4\0\u\g\6\c\c\z\y\5\z\a\z\n\b\j\6\7\n\p\j\2\d\r\l\r\e\b\a\e\p\k\y\w\9\m\x\c\m\j\1\h\n\s\3\w\7\u\o\3\g\e\u\s\i\x\i\n\g\6\m\h\z\h\2\g\s\b\t\x\a\p\f\9\7\9\g\4\r\4\9\u\t\p\v\3\p\1\p\i\c\h\n\b\z\i\q\w\l\5\m\e\0\2\2\c\q\s\o\2\h\u\u\5\9\j\b\c\o\8\a\b\q\x\7\v\l\e\x\t\f\1\l\r\c\p\o\d\w\3\0\n\h\i\4\v\7\e\4\r\e\a\4\4\k\m\0\x\g\q\d\r\k\w\h\o\g\k\0\4\6\y\3\q\0\h\3\c\s\6\p\y\a\9\6\g\b\s\8\d\m\a\4\v\v\v\l\g\y\6\v\t\b\n\r\e\q\y\b\k\0\d\m\e\9\4\n\2\w\c\2\1\b\l\6\d\3\7\y\d\m\u\m\q\l\v\b\r\m\w\a\k\s\u\j\4\q\6\i\h\q\b\9\w\p\n\h\g\m\7\r\c\k\p\e\6\b\3\5\5\a\g\e\m\0\f\m\3\n\n\r\2\t\k\7\r\7\c\g\2\s\q\r\e\8\8\d\g\8\g\v\g\s\0\e\h\w\l\5\1\v\u\j\3\4\n\e\5\s\y\1\3\q\b\s\8\a\j\f\1\s\k\9\0\k\1\9\g\f\f\d\j\w\k\w\x\9\z\4\a\v\4\c\s\t\c\s\n\y\5\8\4\e\7\3\k\0\b\g\2\z\u\c\s\w\c\1\d\k\6\l\z\q\q\2\i\p\p\h\1\w\q\f\q\6\s\m\0\2\p\j\a\t\j\9\2\g\v\3\e\9\c\i\4\l\f\i ]] 00:11:53.867 08:54:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:11:54.124 08:54:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:11:54.124 08:54:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:11:54.124 08:54:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:54.124 08:54:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:54.124 { 00:11:54.124 "subsystems": [ 00:11:54.124 { 00:11:54.124 "subsystem": "bdev", 00:11:54.124 "config": [ 00:11:54.124 { 00:11:54.124 "params": { 00:11:54.124 "block_size": 512, 00:11:54.124 "num_blocks": 1048576, 00:11:54.124 "name": "malloc0" 00:11:54.124 }, 00:11:54.124 "method": "bdev_malloc_create" 00:11:54.124 }, 00:11:54.124 { 00:11:54.124 "params": { 00:11:54.124 "filename": "/dev/zram1", 00:11:54.124 "name": "uring0" 00:11:54.124 }, 00:11:54.124 "method": "bdev_uring_create" 00:11:54.124 }, 00:11:54.124 { 00:11:54.124 "method": "bdev_wait_for_examine" 00:11:54.124 } 00:11:54.124 ] 00:11:54.124 } 00:11:54.124 ] 00:11:54.124 } 00:11:54.124 [2024-07-25 08:54:01.149421] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
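Note: the [[ ... == \q\w\9... ]] comparisons above are the round-trip check. read -rn1024 pulls the leading 1024 bytes of the copied data back into verify_magic and the [[ ]] tests compare it against the original magic; the right-hand side only looks mangled because bash xtrace prints a quoted [[ ]] operand with every character backslash-escaped, marking it as a literal match rather than a glob pattern. diff -q magic.dump0 magic.dump1 then confirms the two 512 MiB files are identical end to end. The same verification can be reproduced by hand (sketch, filenames as in the trace):

cmp --silent /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 && echo 'round trip intact'
head -c 1024 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 | grep -q '^qw9xelhx' && echo 'magic prefix present'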
00:11:54.124 [2024-07-25 08:54:01.149578] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65885 ] 00:11:54.381 [2024-07-25 08:54:01.315932] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.638 [2024-07-25 08:54:01.610787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.896 [2024-07-25 08:54:01.815416] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:02.390  Copying: 123/512 [MB] (123 MBps) Copying: 245/512 [MB] (122 MBps) Copying: 366/512 [MB] (120 MBps) Copying: 484/512 [MB] (117 MBps) Copying: 512/512 [MB] (average 120 MBps) 00:12:02.390 00:12:02.390 08:54:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:12:02.390 08:54:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:12:02.390 08:54:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:12:02.390 08:54:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:12:02.390 08:54:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:12:02.390 08:54:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:02.390 08:54:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:12:02.390 08:54:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:12:02.390 { 00:12:02.390 "subsystems": [ 00:12:02.390 { 00:12:02.390 "subsystem": "bdev", 00:12:02.390 "config": [ 00:12:02.390 { 00:12:02.390 "params": { 00:12:02.390 "block_size": 512, 00:12:02.390 "num_blocks": 1048576, 00:12:02.390 "name": "malloc0" 00:12:02.390 }, 00:12:02.390 "method": "bdev_malloc_create" 00:12:02.391 }, 00:12:02.391 { 00:12:02.391 "params": { 00:12:02.391 "filename": "/dev/zram1", 00:12:02.391 "name": "uring0" 00:12:02.391 }, 00:12:02.391 "method": "bdev_uring_create" 00:12:02.391 }, 00:12:02.391 { 00:12:02.391 "params": { 00:12:02.391 "name": "uring0" 00:12:02.391 }, 00:12:02.391 "method": "bdev_uring_delete" 00:12:02.391 }, 00:12:02.391 { 00:12:02.391 "method": "bdev_wait_for_examine" 00:12:02.391 } 00:12:02.391 ] 00:12:02.391 } 00:12:02.391 ] 00:12:02.391 } 00:12:02.391 [2024-07-25 08:54:09.280831] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
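Note: the invocation above includes bdev_uring_delete in its JSON config, so uring0 is removed while the app is up; the data path of that run is intentionally empty (the bare ':' commands feed it empty /dev/fd streams, hence the 'Copying: 0/0 [B]' below), its purpose is just to execute the delete. The invocation after it is wrapped in NOT, so the test only passes if spdk_dd fails, which it does further down with 'Could not open bdev uring0: No such device'. Conceptually NOT (from test/common/autotest_common.sh) is an exit-status inverter; a minimal stand-in, not the real helper, would be:

NOT() {
    # succeed only when the wrapped command fails
    if "$@"; then
        return 1
    fi
    return 0
}
# usage, mirroring the trace: NOT build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61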
00:12:02.391 [2024-07-25 08:54:09.281031] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65986 ] 00:12:02.391 [2024-07-25 08:54:09.453212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.649 [2024-07-25 08:54:09.688808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.907 [2024-07-25 08:54:09.894390] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:06.002  Copying: 0/0 [B] (average 0 Bps) 00:12:06.002 00:12:06.002 08:54:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:12:06.002 08:54:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:12:06.002 08:54:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # local es=0 00:12:06.002 08:54:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:12:06.002 08:54:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:12:06.002 08:54:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:06.002 08:54:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:06.002 08:54:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:12:06.002 08:54:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:06.002 08:54:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:06.002 08:54:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:06.002 08:54:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:06.002 08:54:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:06.002 08:54:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:06.002 08:54:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:06.002 08:54:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:12:06.002 { 00:12:06.002 "subsystems": [ 00:12:06.002 { 00:12:06.002 "subsystem": "bdev", 00:12:06.002 "config": [ 00:12:06.002 { 00:12:06.002 "params": { 00:12:06.002 "block_size": 512, 00:12:06.002 "num_blocks": 1048576, 00:12:06.002 "name": "malloc0" 00:12:06.002 }, 00:12:06.002 "method": "bdev_malloc_create" 00:12:06.002 }, 00:12:06.002 { 00:12:06.002 "params": { 00:12:06.002 "filename": "/dev/zram1", 00:12:06.002 "name": "uring0" 00:12:06.002 }, 00:12:06.002 "method": "bdev_uring_create" 00:12:06.002 }, 00:12:06.002 { 00:12:06.002 "params": { 00:12:06.002 "name": "uring0" 00:12:06.002 }, 00:12:06.002 "method": "bdev_uring_delete" 00:12:06.002 }, 
00:12:06.002 { 00:12:06.002 "method": "bdev_wait_for_examine" 00:12:06.002 } 00:12:06.002 ] 00:12:06.002 } 00:12:06.002 ] 00:12:06.002 } 00:12:06.002 [2024-07-25 08:54:13.055962] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:06.002 [2024-07-25 08:54:13.056121] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66044 ] 00:12:06.260 [2024-07-25 08:54:13.218009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.518 [2024-07-25 08:54:13.451672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.775 [2024-07-25 08:54:13.655442] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:07.341 [2024-07-25 08:54:14.299960] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:12:07.341 [2024-07-25 08:54:14.300022] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:12:07.341 [2024-07-25 08:54:14.300043] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:12:07.341 [2024-07-25 08:54:14.300063] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:09.240 [2024-07-25 08:54:16.261337] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:09.806 08:54:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # es=237 00:12:09.806 08:54:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:09.806 08:54:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@662 -- # es=109 00:12:09.806 08:54:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # case "$es" in 00:12:09.806 08:54:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@670 -- # es=1 00:12:09.806 08:54:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:09.806 08:54:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:12:09.806 08:54:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:12:09.806 08:54:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:12:09.806 08:54:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:12:09.806 08:54:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:12:09.806 08:54:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:12:10.065 00:12:10.065 real 0m34.687s 00:12:10.065 user 0m28.388s 00:12:10.065 sys 0m16.804s 00:12:10.065 08:54:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:10.065 ************************************ 00:12:10.065 END TEST dd_uring_copy 00:12:10.065 ************************************ 00:12:10.065 08:54:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:12:10.065 00:12:10.065 real 0m34.822s 00:12:10.065 user 0m28.436s 00:12:10.065 sys 0m16.889s 00:12:10.065 08:54:16 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:10.065 08:54:16 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:12:10.065 ************************************ 
00:12:10.065 END TEST spdk_dd_uring 00:12:10.065 ************************************ 00:12:10.065 08:54:17 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:12:10.065 08:54:17 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:10.065 08:54:17 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:10.065 08:54:17 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:12:10.065 ************************************ 00:12:10.065 START TEST spdk_dd_sparse 00:12:10.065 ************************************ 00:12:10.065 08:54:17 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:12:10.065 * Looking for test storage... 00:12:10.065 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:12:10.065 08:54:17 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:10.065 08:54:17 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:10.065 08:54:17 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:10.065 08:54:17 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:10.065 08:54:17 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.065 08:54:17 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.065 08:54:17 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.065 08:54:17 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:12:10.066 08:54:17 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.066 08:54:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:12:10.066 08:54:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:12:10.066 08:54:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:12:10.066 08:54:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:12:10.066 08:54:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:12:10.066 08:54:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:12:10.066 08:54:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:12:10.066 08:54:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:12:10.066 08:54:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:12:10.066 08:54:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:12:10.066 08:54:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:12:10.066 1+0 records in 00:12:10.066 1+0 records out 00:12:10.066 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00680119 s, 617 MB/s 00:12:10.066 08:54:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:12:10.066 1+0 records in 00:12:10.066 1+0 records out 00:12:10.066 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00646528 s, 649 MB/s 00:12:10.066 08:54:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:12:10.066 1+0 records in 00:12:10.066 1+0 records out 00:12:10.066 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00465138 s, 902 MB/s 00:12:10.066 08:54:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:12:10.066 08:54:17 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:10.066 08:54:17 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:10.066 08:54:17 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:12:10.324 ************************************ 00:12:10.324 START TEST dd_sparse_file_to_file 00:12:10.324 ************************************ 00:12:10.324 08:54:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1125 -- # file_to_file 00:12:10.324 08:54:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:12:10.324 08:54:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:12:10.324 08:54:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:12:10.324 08:54:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:12:10.324 08:54:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' 
['lvs_name']='dd_lvstore') 00:12:10.324 08:54:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:12:10.324 08:54:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:12:10.324 08:54:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:12:10.324 08:54:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:12:10.324 08:54:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:12:10.324 { 00:12:10.324 "subsystems": [ 00:12:10.324 { 00:12:10.324 "subsystem": "bdev", 00:12:10.324 "config": [ 00:12:10.324 { 00:12:10.324 "params": { 00:12:10.324 "block_size": 4096, 00:12:10.324 "filename": "dd_sparse_aio_disk", 00:12:10.324 "name": "dd_aio" 00:12:10.324 }, 00:12:10.324 "method": "bdev_aio_create" 00:12:10.324 }, 00:12:10.324 { 00:12:10.324 "params": { 00:12:10.324 "lvs_name": "dd_lvstore", 00:12:10.324 "bdev_name": "dd_aio" 00:12:10.324 }, 00:12:10.324 "method": "bdev_lvol_create_lvstore" 00:12:10.324 }, 00:12:10.324 { 00:12:10.324 "method": "bdev_wait_for_examine" 00:12:10.324 } 00:12:10.324 ] 00:12:10.324 } 00:12:10.324 ] 00:12:10.324 } 00:12:10.324 [2024-07-25 08:54:17.295754] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:10.324 [2024-07-25 08:54:17.296876] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66164 ] 00:12:10.583 [2024-07-25 08:54:17.475481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.841 [2024-07-25 08:54:17.765809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.102 [2024-07-25 08:54:17.971783] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:12.478  Copying: 12/36 [MB] (average 800 MBps) 00:12:12.478 00:12:12.478 08:54:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:12:12.478 08:54:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:12:12.478 08:54:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:12:12.478 08:54:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:12:12.478 08:54:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:12:12.478 08:54:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:12:12.478 08:54:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:12:12.478 08:54:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:12:12.479 08:54:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:12:12.479 08:54:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:12:12.479 00:12:12.479 real 0m2.266s 00:12:12.479 user 0m1.854s 00:12:12.479 sys 0m1.113s 00:12:12.479 08:54:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1126 
-- # xtrace_disable 00:12:12.479 08:54:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:12:12.479 ************************************ 00:12:12.479 END TEST dd_sparse_file_to_file 00:12:12.479 ************************************ 00:12:12.479 08:54:19 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:12:12.479 08:54:19 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:12.479 08:54:19 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:12.479 08:54:19 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:12:12.479 ************************************ 00:12:12.479 START TEST dd_sparse_file_to_bdev 00:12:12.479 ************************************ 00:12:12.479 08:54:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1125 -- # file_to_bdev 00:12:12.479 08:54:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:12:12.479 08:54:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:12:12.479 08:54:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:12:12.479 08:54:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:12:12.479 08:54:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:12:12.479 08:54:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:12:12.479 08:54:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:12:12.479 08:54:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:12:12.479 { 00:12:12.479 "subsystems": [ 00:12:12.479 { 00:12:12.479 "subsystem": "bdev", 00:12:12.479 "config": [ 00:12:12.479 { 00:12:12.479 "params": { 00:12:12.479 "block_size": 4096, 00:12:12.479 "filename": "dd_sparse_aio_disk", 00:12:12.479 "name": "dd_aio" 00:12:12.479 }, 00:12:12.479 "method": "bdev_aio_create" 00:12:12.479 }, 00:12:12.479 { 00:12:12.479 "params": { 00:12:12.479 "lvs_name": "dd_lvstore", 00:12:12.479 "lvol_name": "dd_lvol", 00:12:12.479 "size_in_mib": 36, 00:12:12.479 "thin_provision": true 00:12:12.479 }, 00:12:12.479 "method": "bdev_lvol_create" 00:12:12.479 }, 00:12:12.479 { 00:12:12.479 "method": "bdev_wait_for_examine" 00:12:12.479 } 00:12:12.479 ] 00:12:12.479 } 00:12:12.479 ] 00:12:12.479 } 00:12:12.738 [2024-07-25 08:54:19.602338] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
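Note on the sparse suite's geometry: prepare (earlier in this section) built file_zero1 from three 4 MiB writes at offsets 0, 16 MiB and 32 MiB (bs=4M, the second and third with seek=4 and seek=8), so the file has a 36 MiB apparent size but only 12 MiB of data, which is exactly the 'Copying: 12/36 [MB]' figure each leg reports. This leg copies that data into a thin-provisioned logical volume (dd_lvstore/dd_lvol, size_in_mib 36, thin_provision true in the config above), so only the populated ranges should consume clusters in the lvstore. The arithmetic, spelled out:

echo $(( 3 * 4 * 1024 * 1024 ))          # 12582912 bytes = 12 MiB of real data
echo $(( (8 + 1) * 4 * 1024 * 1024 ))    # 37748736 bytes = 36 MiB apparent size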
00:12:12.738 [2024-07-25 08:54:19.602527] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66230 ] 00:12:12.738 [2024-07-25 08:54:19.778255] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.996 [2024-07-25 08:54:20.016764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.255 [2024-07-25 08:54:20.220978] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:14.893  Copying: 12/36 [MB] (average 521 MBps) 00:12:14.893 00:12:14.893 00:12:14.893 real 0m2.169s 00:12:14.893 user 0m1.810s 00:12:14.893 sys 0m1.071s 00:12:14.893 08:54:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:14.893 ************************************ 00:12:14.893 END TEST dd_sparse_file_to_bdev 00:12:14.893 ************************************ 00:12:14.893 08:54:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:12:14.893 08:54:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:12:14.893 08:54:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:14.893 08:54:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:14.893 08:54:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:12:14.893 ************************************ 00:12:14.893 START TEST dd_sparse_bdev_to_file 00:12:14.893 ************************************ 00:12:14.893 08:54:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1125 -- # bdev_to_file 00:12:14.893 08:54:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:12:14.893 08:54:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:12:14.893 08:54:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:12:14.893 08:54:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:12:14.894 08:54:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:12:14.894 08:54:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:12:14.894 08:54:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:12:14.894 08:54:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:12:14.894 { 00:12:14.894 "subsystems": [ 00:12:14.894 { 00:12:14.894 "subsystem": "bdev", 00:12:14.894 "config": [ 00:12:14.894 { 00:12:14.894 "params": { 00:12:14.894 "block_size": 4096, 00:12:14.894 "filename": "dd_sparse_aio_disk", 00:12:14.894 "name": "dd_aio" 00:12:14.894 }, 00:12:14.894 "method": "bdev_aio_create" 00:12:14.894 }, 00:12:14.894 { 00:12:14.894 "method": "bdev_wait_for_examine" 00:12:14.894 } 00:12:14.894 ] 00:12:14.894 } 00:12:14.894 ] 00:12:14.894 } 00:12:14.894 [2024-07-25 08:54:21.822210] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:12:14.894 [2024-07-25 08:54:21.822384] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66281 ] 00:12:14.894 [2024-07-25 08:54:22.000660] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.153 [2024-07-25 08:54:22.238347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.411 [2024-07-25 08:54:22.439451] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:17.062  Copying: 12/36 [MB] (average 1000 MBps) 00:12:17.062 00:12:17.062 08:54:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:12:17.062 08:54:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:12:17.062 08:54:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:12:17.062 08:54:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:12:17.062 08:54:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:12:17.062 08:54:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:12:17.062 08:54:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:12:17.062 08:54:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:12:17.062 08:54:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:12:17.062 08:54:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:12:17.062 00:12:17.062 real 0m2.214s 00:12:17.062 user 0m1.846s 00:12:17.062 sys 0m1.115s 00:12:17.062 ************************************ 00:12:17.062 END TEST dd_sparse_bdev_to_file 00:12:17.062 ************************************ 00:12:17.062 08:54:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:17.062 08:54:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:12:17.062 08:54:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:12:17.062 08:54:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:12:17.062 08:54:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:12:17.062 08:54:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:12:17.062 08:54:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:12:17.062 ************************************ 00:12:17.062 END TEST spdk_dd_sparse 00:12:17.062 ************************************ 00:12:17.062 00:12:17.062 real 0m6.938s 00:12:17.062 user 0m5.609s 00:12:17.062 sys 0m3.473s 00:12:17.063 08:54:23 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:17.063 08:54:23 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:12:17.063 08:54:24 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:12:17.063 08:54:24 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:17.063 08:54:24 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:17.063 08:54:24 spdk_dd -- common/autotest_common.sh@10 -- # set +x 
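Note: the stat checks that close the sparse legs are the actual pass criteria. %s is the apparent size and %b the number of allocated 512-byte blocks, so 37748736 = 36 MiB apparent and 24576 * 512 = 12582912 = 12 MiB allocated for file_zero1 and file_zero2 in the first leg and for file_zero2 and file_zero3 in the last one, meaning the holes survived the round trip through the thin-provisioned lvol. A by-hand equivalent (sketch; %B prints the block unit that %b counts in):

echo $(( 24576 * 512 ))                                                       # 12582912 -> 12 MiB actually allocated
stat --printf='%s apparent bytes, %b blocks of %B bytes allocated\n' file_zero2 file_zero3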
00:12:17.063 ************************************ 00:12:17.063 START TEST spdk_dd_negative 00:12:17.063 ************************************ 00:12:17.063 08:54:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:12:17.063 * Looking for test storage... 00:12:17.063 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:12:17.063 08:54:24 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:17.063 08:54:24 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:17.063 08:54:24 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:17.063 08:54:24 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:17.063 08:54:24 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.063 08:54:24 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.063 08:54:24 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.063 08:54:24 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:12:17.063 08:54:24 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.063 08:54:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:17.063 08:54:24 spdk_dd.spdk_dd_negative -- 
dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:12:17.063 08:54:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:17.063 08:54:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:12:17.063 08:54:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:12:17.063 08:54:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:17.063 08:54:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:17.063 08:54:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:17.063 ************************************ 00:12:17.063 START TEST dd_invalid_arguments 00:12:17.063 ************************************ 00:12:17.063 08:54:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1125 -- # invalid_arguments 00:12:17.063 08:54:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:12:17.063 08:54:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # local es=0 00:12:17.063 08:54:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:12:17.063 08:54:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:17.063 08:54:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:17.063 08:54:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:17.063 08:54:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:17.063 08:54:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:17.063 08:54:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:17.063 08:54:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:17.063 08:54:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:17.063 08:54:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:12:17.323 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:12:17.323 00:12:17.323 CPU options: 00:12:17.323 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:12:17.323 (like [0,1,10]) 00:12:17.323 --lcores lcore to CPU mapping list. The list is in the format: 00:12:17.323 [<,lcores[@CPUs]>...] 00:12:17.323 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:12:17.323 Within the group, '-' is used for range separator, 00:12:17.323 ',' is used for single number separator. 
00:12:17.323 '( )' can be omitted for single element group, 00:12:17.323 '@' can be omitted if cpus and lcores have the same value 00:12:17.323 --disable-cpumask-locks Disable CPU core lock files. 00:12:17.323 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:12:17.323 pollers in the app support interrupt mode) 00:12:17.323 -p, --main-core main (primary) core for DPDK 00:12:17.323 00:12:17.323 Configuration options: 00:12:17.323 -c, --config, --json JSON config file 00:12:17.323 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:12:17.323 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:12:17.323 --wait-for-rpc wait for RPCs to initialize subsystems 00:12:17.323 --rpcs-allowed comma-separated list of permitted RPCS 00:12:17.323 --json-ignore-init-errors don't exit on invalid config entry 00:12:17.323 00:12:17.323 Memory options: 00:12:17.323 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:12:17.323 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:12:17.323 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:12:17.323 -R, --huge-unlink unlink huge files after initialization 00:12:17.323 -n, --mem-channels number of memory channels used for DPDK 00:12:17.323 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:12:17.323 --msg-mempool-size global message memory pool size in count (default: 262143) 00:12:17.323 --no-huge run without using hugepages 00:12:17.323 -i, --shm-id shared memory ID (optional) 00:12:17.323 -g, --single-file-segments force creating just one hugetlbfs file 00:12:17.323 00:12:17.323 PCI options: 00:12:17.323 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:12:17.323 -B, --pci-blocked pci addr to block (can be used more than once) 00:12:17.323 -u, --no-pci disable PCI access 00:12:17.323 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:12:17.323 00:12:17.323 Log options: 00:12:17.323 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:12:17.323 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:12:17.323 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:12:17.323 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:12:17.323 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:12:17.323 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:12:17.323 nvme_auth, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, scsi, 00:12:17.323 sock, sock_posix, thread, trace, uring, vbdev_delay, vbdev_gpt, 00:12:17.323 vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, 00:12:17.323 vfio_pci, vfio_user, vfu, vfu_virtio, vfu_virtio_blk, vfu_virtio_io, 00:12:17.323 vfu_virtio_scsi, vfu_virtio_scsi_data, virtio, virtio_blk, virtio_dev, 00:12:17.323 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:12:17.323 --silence-noticelog disable notice level logging to stderr 00:12:17.323 00:12:17.323 Trace options: 00:12:17.323 --num-trace-entries number of trace entries for each core, must be power of 2, 00:12:17.323 setting 0 to disable trace (default 32768) 00:12:17.323 Tracepoints vary in size and can use more than one trace entry. 
00:12:17.323 -e, --tpoint-group [: 128 )) 00:12:17.583 08:54:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:17.583 08:54:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:17.583 00:12:17.583 real 0m0.179s 00:12:17.583 user 0m0.100s 00:12:17.583 sys 0m0.077s 00:12:17.583 ************************************ 00:12:17.583 END TEST dd_double_input 00:12:17.583 ************************************ 00:12:17.583 08:54:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:17.583 08:54:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:12:17.583 08:54:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:12:17.583 08:54:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:17.583 08:54:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:17.583 08:54:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:17.583 ************************************ 00:12:17.583 START TEST dd_double_output 00:12:17.583 ************************************ 00:12:17.583 08:54:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1125 -- # double_output 00:12:17.583 08:54:24 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:12:17.583 08:54:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # local es=0 00:12:17.583 08:54:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:12:17.583 08:54:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:17.583 08:54:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:17.583 08:54:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:17.583 08:54:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:17.583 08:54:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:17.583 08:54:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:17.583 08:54:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:17.583 08:54:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:17.583 08:54:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:12:17.842 [2024-07-25 08:54:24.701251] spdk_dd.c:1493:main: *ERROR*: You may specify 
either --of or --ob, but not both. 00:12:17.842 08:54:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # es=22 00:12:17.842 08:54:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:17.842 08:54:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:17.842 08:54:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:17.842 00:12:17.842 real 0m0.177s 00:12:17.842 user 0m0.102s 00:12:17.842 sys 0m0.072s 00:12:17.842 08:54:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:17.842 ************************************ 00:12:17.842 END TEST dd_double_output 00:12:17.842 ************************************ 00:12:17.842 08:54:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:12:17.842 08:54:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:12:17.842 08:54:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:17.842 08:54:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:17.842 08:54:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:17.842 ************************************ 00:12:17.842 START TEST dd_no_input 00:12:17.842 ************************************ 00:12:17.842 08:54:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1125 -- # no_input 00:12:17.842 08:54:24 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:12:17.842 08:54:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # local es=0 00:12:17.842 08:54:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:12:17.842 08:54:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:17.842 08:54:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:17.842 08:54:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:17.842 08:54:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:17.842 08:54:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:17.842 08:54:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:17.842 08:54:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:17.842 08:54:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:17.842 08:54:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:12:17.842 [2024-07-25 08:54:24.944023] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:12:18.101 08:54:25 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # es=22 00:12:18.101 08:54:25 
spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:18.101 ************************************ 00:12:18.101 END TEST dd_no_input 00:12:18.101 ************************************ 00:12:18.101 08:54:25 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:18.101 08:54:25 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:18.101 00:12:18.101 real 0m0.188s 00:12:18.101 user 0m0.104s 00:12:18.101 sys 0m0.082s 00:12:18.101 08:54:25 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:18.101 08:54:25 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:12:18.101 08:54:25 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:12:18.101 08:54:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:18.101 08:54:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:18.101 08:54:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:18.101 ************************************ 00:12:18.101 START TEST dd_no_output 00:12:18.101 ************************************ 00:12:18.101 08:54:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1125 -- # no_output 00:12:18.101 08:54:25 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:18.101 08:54:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # local es=0 00:12:18.101 08:54:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:18.101 08:54:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:18.101 08:54:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:18.101 08:54:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:18.101 08:54:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:18.101 08:54:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:18.101 08:54:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:18.101 08:54:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:18.101 08:54:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:18.101 08:54:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:18.101 [2024-07-25 08:54:25.151622] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:12:18.101 08:54:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # es=22 00:12:18.101 08:54:25 spdk_dd.spdk_dd_negative.dd_no_output -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:18.101 08:54:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:18.101 ************************************ 00:12:18.101 END TEST dd_no_output 00:12:18.101 ************************************ 00:12:18.101 08:54:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:18.101 00:12:18.101 real 0m0.154s 00:12:18.101 user 0m0.083s 00:12:18.101 sys 0m0.071s 00:12:18.101 08:54:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:18.101 08:54:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:12:18.360 08:54:25 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:12:18.360 08:54:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:18.360 08:54:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:18.360 08:54:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:18.360 ************************************ 00:12:18.360 START TEST dd_wrong_blocksize 00:12:18.360 ************************************ 00:12:18.360 08:54:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1125 -- # wrong_blocksize 00:12:18.360 08:54:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:12:18.360 08:54:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:12:18.360 08:54:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:12:18.360 08:54:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:18.360 08:54:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:18.360 08:54:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:18.360 08:54:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:18.360 08:54:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:18.360 08:54:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:18.360 08:54:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:18.360 08:54:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:18.360 08:54:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:12:18.360 [2024-07-25 08:54:25.363713] 
spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:12:18.360 08:54:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # es=22 00:12:18.360 08:54:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:18.360 08:54:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:18.360 08:54:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:18.360 00:12:18.360 real 0m0.167s 00:12:18.360 user 0m0.086s 00:12:18.360 sys 0m0.079s 00:12:18.360 ************************************ 00:12:18.360 END TEST dd_wrong_blocksize 00:12:18.360 ************************************ 00:12:18.360 08:54:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:18.360 08:54:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:12:18.360 08:54:25 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:12:18.360 08:54:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:18.360 08:54:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:18.360 08:54:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:18.360 ************************************ 00:12:18.360 START TEST dd_smaller_blocksize 00:12:18.360 ************************************ 00:12:18.360 08:54:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1125 -- # smaller_blocksize 00:12:18.360 08:54:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:12:18.360 08:54:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:12:18.360 08:54:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:12:18.360 08:54:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:18.360 08:54:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:18.360 08:54:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:18.619 08:54:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:18.619 08:54:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:18.619 08:54:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:18.619 08:54:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:18.619 08:54:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
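For context on the dd_smaller_blocksize case that follows: the harness deliberately passes an absurdly large --bs so that spdk_dd's hugepage-backed buffer allocation fails and the app reports "Cannot allocate memory - try smaller block size value". A rough stand-alone reproduction is sketched below, assuming the same /home/vagrant/spdk_repo layout as this run; the /tmp paths are placeholders, not the dump files the harness uses.

    # Sketch only: drive spdk_dd with an oversized block size and expect a non-zero exit.
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd   # same binary exercised above
    touch /tmp/dd.dump0                                      # placeholder input file
    if ! "$SPDK_DD" --if=/tmp/dd.dump0 --of=/tmp/dd.dump1 --bs=99999999999999; then
        echo "oversized --bs rejected as expected"
    fi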
00:12:18.619 08:54:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:12:18.619 [2024-07-25 08:54:25.598863] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:18.619 [2024-07-25 08:54:25.599024] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66528 ] 00:12:18.878 [2024-07-25 08:54:25.765510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.137 [2024-07-25 08:54:26.050524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.395 [2024-07-25 08:54:26.255939] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:19.654 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:12:19.654 [2024-07-25 08:54:26.733783] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:12:19.654 [2024-07-25 08:54:26.733928] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:20.588 [2024-07-25 08:54:27.481924] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:20.847 08:54:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # es=244 00:12:20.847 08:54:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:20.847 08:54:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@662 -- # es=116 00:12:20.847 08:54:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # case "$es" in 00:12:20.847 08:54:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@670 -- # es=1 00:12:20.847 08:54:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:20.847 00:12:20.847 real 0m2.463s 00:12:20.847 user 0m1.785s 00:12:20.847 sys 0m0.561s 00:12:20.847 08:54:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:20.847 ************************************ 00:12:20.847 END TEST dd_smaller_blocksize 00:12:20.847 ************************************ 00:12:20.847 08:54:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:12:21.106 08:54:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:12:21.106 08:54:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:21.106 08:54:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:21.106 08:54:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:21.106 ************************************ 00:12:21.106 START TEST dd_invalid_count 00:12:21.106 ************************************ 00:12:21.106 08:54:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1125 -- # invalid_count 00:12:21.106 08:54:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:12:21.106 08:54:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # local es=0 00:12:21.106 08:54:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:12:21.107 08:54:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:21.107 08:54:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:21.107 08:54:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:21.107 08:54:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:21.107 08:54:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:21.107 08:54:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:21.107 08:54:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:21.107 08:54:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:21.107 08:54:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:12:21.107 [2024-07-25 08:54:28.081341] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:12:21.107 08:54:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # es=22 00:12:21.107 08:54:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:21.107 08:54:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:21.107 08:54:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:21.107 00:12:21.107 real 0m0.147s 00:12:21.107 user 0m0.073s 00:12:21.107 sys 0m0.073s 00:12:21.107 08:54:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:21.107 ************************************ 00:12:21.107 END TEST dd_invalid_count 00:12:21.107 ************************************ 00:12:21.107 08:54:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:12:21.107 08:54:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:12:21.107 08:54:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:21.107 08:54:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:21.107 08:54:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:21.107 ************************************ 00:12:21.107 START TEST dd_invalid_oflag 00:12:21.107 ************************************ 00:12:21.107 08:54:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1125 -- # 
invalid_oflag 00:12:21.107 08:54:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:12:21.107 08:54:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # local es=0 00:12:21.107 08:54:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:12:21.107 08:54:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:21.107 08:54:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:21.107 08:54:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:21.107 08:54:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:21.107 08:54:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:21.107 08:54:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:21.107 08:54:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:21.107 08:54:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:21.107 08:54:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:12:21.366 [2024-07-25 08:54:28.274800] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:12:21.366 ************************************ 00:12:21.366 END TEST dd_invalid_oflag 00:12:21.366 ************************************ 00:12:21.366 08:54:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # es=22 00:12:21.366 08:54:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:21.366 08:54:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:21.366 08:54:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:21.366 00:12:21.366 real 0m0.147s 00:12:21.366 user 0m0.082s 00:12:21.366 sys 0m0.063s 00:12:21.366 08:54:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:21.366 08:54:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:12:21.366 08:54:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:12:21.366 08:54:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:21.366 08:54:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:21.366 08:54:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:21.366 ************************************ 00:12:21.366 START TEST dd_invalid_iflag 00:12:21.366 ************************************ 00:12:21.366 08:54:28 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1125 -- # invalid_iflag 00:12:21.366 08:54:28 
spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:12:21.366 08:54:28 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # local es=0 00:12:21.366 08:54:28 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:12:21.366 08:54:28 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:21.366 08:54:28 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:21.366 08:54:28 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:21.366 08:54:28 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:21.366 08:54:28 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:21.366 08:54:28 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:21.366 08:54:28 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:21.366 08:54:28 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:21.366 08:54:28 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:12:21.625 [2024-07-25 08:54:28.511693] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:12:21.625 08:54:28 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # es=22 00:12:21.625 08:54:28 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:21.625 ************************************ 00:12:21.625 END TEST dd_invalid_iflag 00:12:21.625 ************************************ 00:12:21.625 08:54:28 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:21.625 08:54:28 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:21.625 00:12:21.625 real 0m0.194s 00:12:21.625 user 0m0.109s 00:12:21.625 sys 0m0.082s 00:12:21.625 08:54:28 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:21.625 08:54:28 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:12:21.625 08:54:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:12:21.625 08:54:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:21.625 08:54:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:21.625 08:54:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:21.625 ************************************ 00:12:21.625 START TEST dd_unknown_flag 00:12:21.625 ************************************ 00:12:21.625 08:54:28 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1125 -- # unknown_flag 00:12:21.625 08:54:28 spdk_dd.spdk_dd_negative.dd_unknown_flag -- 
dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:12:21.625 08:54:28 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # local es=0 00:12:21.625 08:54:28 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:12:21.625 08:54:28 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:21.625 08:54:28 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:21.625 08:54:28 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:21.625 08:54:28 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:21.625 08:54:28 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:21.625 08:54:28 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:21.625 08:54:28 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:21.625 08:54:28 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:21.625 08:54:28 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:12:21.883 [2024-07-25 08:54:28.782705] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
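The dd_unknown_flag run starting here feeds --oflag=-1, which spdk_dd should reject with "Unknown file flag: -1" (visible a few records below). A minimal by-hand equivalent is sketched here, with placeholder /tmp paths standing in for the harness's dump files.

    # Sketch only: an unrecognized file flag must make spdk_dd exit non-zero.
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    touch /tmp/dd.dump0                                      # placeholder input file
    if ! "$SPDK_DD" --if=/tmp/dd.dump0 --of=/tmp/dd.dump1 --oflag=-1; then
        echo "unknown --oflag rejected as expected"
    fi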
00:12:21.883 [2024-07-25 08:54:28.782905] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66645 ] 00:12:21.883 [2024-07-25 08:54:28.943874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.140 [2024-07-25 08:54:29.184728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.398 [2024-07-25 08:54:29.389692] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:22.398 [2024-07-25 08:54:29.493836] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:12:22.398 [2024-07-25 08:54:29.493958] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:22.398 [2024-07-25 08:54:29.494040] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:12:22.398 [2024-07-25 08:54:29.494077] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:22.398 [2024-07-25 08:54:29.494352] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:12:22.398 [2024-07-25 08:54:29.494375] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:22.398 [2024-07-25 08:54:29.494443] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:12:22.398 [2024-07-25 08:54:29.494458] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:12:23.405 [2024-07-25 08:54:30.229992] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:23.663 08:54:30 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # es=234 00:12:23.664 08:54:30 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:23.664 08:54:30 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@662 -- # es=106 00:12:23.664 08:54:30 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # case "$es" in 00:12:23.664 08:54:30 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@670 -- # es=1 00:12:23.664 08:54:30 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:23.664 00:12:23.664 real 0m2.048s 00:12:23.664 user 0m1.644s 00:12:23.664 sys 0m0.296s 00:12:23.664 08:54:30 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:23.664 ************************************ 00:12:23.664 END TEST dd_unknown_flag 00:12:23.664 ************************************ 00:12:23.664 08:54:30 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:12:23.664 08:54:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:12:23.664 08:54:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:23.664 08:54:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:23.664 08:54:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:23.664 ************************************ 00:12:23.664 START TEST dd_invalid_json 00:12:23.664 ************************************ 00:12:23.664 08:54:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1125 -- # invalid_json 00:12:23.664 08:54:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:12:23.664 08:54:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:12:23.664 08:54:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # local es=0 00:12:23.664 08:54:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:12:23.664 08:54:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:23.664 08:54:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:23.664 08:54:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:23.664 08:54:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:23.664 08:54:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:23.664 08:54:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:23.664 08:54:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:23.664 08:54:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:23.664 08:54:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:12:23.922 [2024-07-25 08:54:30.817412] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
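The dd_invalid_json case starting here passes --json /dev/fd/62, which the harness points at an invalid document; spdk_dd then fails with "JSON data cannot be empty" further down. Roughly the same negative check can be made with an explicit empty file, which is an assumption standing in for whatever the harness actually wrote to /dev/fd/62.

    # Sketch only: an empty/invalid JSON config must abort the copy.
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    : > /tmp/empty.json                                      # deliberately empty config
    touch /tmp/dd.dump0                                      # placeholder input file
    if ! "$SPDK_DD" --if=/tmp/dd.dump0 --of=/tmp/dd.dump1 --json /tmp/empty.json; then
        echo "invalid JSON config rejected as expected"
    fi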
00:12:23.922 [2024-07-25 08:54:30.817590] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66693 ] 00:12:23.922 [2024-07-25 08:54:30.981302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.180 [2024-07-25 08:54:31.225777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.180 [2024-07-25 08:54:31.225892] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:12:24.180 [2024-07-25 08:54:31.225924] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:24.180 [2024-07-25 08:54:31.225940] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:24.180 [2024-07-25 08:54:31.226033] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:24.747 08:54:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # es=234 00:12:24.747 08:54:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:24.747 08:54:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@662 -- # es=106 00:12:24.747 08:54:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # case "$es" in 00:12:24.747 08:54:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@670 -- # es=1 00:12:24.747 08:54:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:24.747 ************************************ 00:12:24.747 END TEST dd_invalid_json 00:12:24.747 ************************************ 00:12:24.747 00:12:24.747 real 0m0.939s 00:12:24.747 user 0m0.678s 00:12:24.747 sys 0m0.156s 00:12:24.747 08:54:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:24.747 08:54:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:12:24.747 ************************************ 00:12:24.747 END TEST spdk_dd_negative 00:12:24.747 ************************************ 00:12:24.747 00:12:24.747 real 0m7.660s 00:12:24.747 user 0m5.150s 00:12:24.747 sys 0m2.116s 00:12:24.747 08:54:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:24.747 08:54:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:24.747 00:12:24.747 real 3m24.290s 00:12:24.747 user 2m45.747s 00:12:24.747 sys 1m11.131s 00:12:24.747 ************************************ 00:12:24.747 END TEST spdk_dd 00:12:24.747 ************************************ 00:12:24.747 08:54:31 spdk_dd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:24.747 08:54:31 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:12:24.747 08:54:31 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 00:12:24.747 08:54:31 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:12:24.747 08:54:31 -- spdk/autotest.sh@264 -- # timing_exit lib 00:12:24.747 08:54:31 -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:24.747 08:54:31 -- common/autotest_common.sh@10 -- # set +x 00:12:24.747 08:54:31 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:12:24.747 08:54:31 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:12:24.747 08:54:31 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:12:24.747 08:54:31 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:12:24.747 08:54:31 -- 
spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:12:24.747 08:54:31 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:12:24.747 08:54:31 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:12:24.747 08:54:31 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:24.747 08:54:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:24.747 08:54:31 -- common/autotest_common.sh@10 -- # set +x 00:12:24.747 ************************************ 00:12:24.747 START TEST nvmf_tcp 00:12:24.747 ************************************ 00:12:24.747 08:54:31 nvmf_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:12:25.006 * Looking for test storage... 00:12:25.006 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:12:25.006 08:54:31 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:12:25.006 08:54:31 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:12:25.006 08:54:31 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:12:25.006 08:54:31 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:25.006 08:54:31 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:25.006 08:54:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:25.006 ************************************ 00:12:25.006 START TEST nvmf_target_core 00:12:25.006 ************************************ 00:12:25.006 08:54:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:12:25.006 * Looking for test storage... 00:12:25.006 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:12:25.006 08:54:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:12:25.006 08:54:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:12:25.006 08:54:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:25.006 08:54:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:12:25.006 08:54:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:25.006 08:54:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:25.007 08:54:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:25.007 08:54:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:25.007 08:54:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:25.007 08:54:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:25.007 08:54:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:25.007 08:54:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:25.007 08:54:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:25.007 08:54:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:25.007 08:54:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:12:25.007 08:54:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:12:25.007 08:54:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:25.007 08:54:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:25.007 08:54:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:25.007 08:54:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:25.007 08:54:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:25.007 08:54:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:25.007 08:54:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:25.007 08:54:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:25.007 08:54:32 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.007 08:54:32 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.007 08:54:32 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.007 08:54:32 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:12:25.007 08:54:32 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.007 08:54:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:12:25.007 08:54:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:25.007 08:54:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:25.007 08:54:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:25.007 08:54:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:25.007 08:54:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:25.007 08:54:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:25.007 08:54:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:25.007 08:54:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:25.007 08:54:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:25.007 08:54:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:12:25.007 08:54:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:12:25.007 08:54:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:25.007 08:54:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:25.007 08:54:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:25.007 08:54:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:25.007 ************************************ 00:12:25.007 START TEST nvmf_host_management 00:12:25.007 ************************************ 00:12:25.007 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:25.266 * Looking for test storage... 
00:12:25.266 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:25.266 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:25.266 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:12:25.266 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:25.266 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:25.266 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:25.266 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:25.266 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:25.266 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 
-- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:25.267 Cannot find device "nvmf_init_br" 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # true 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:25.267 Cannot find device "nvmf_tgt_br" 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:25.267 Cannot find device "nvmf_tgt_br2" 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:25.267 Cannot find device "nvmf_init_br" 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # true 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:25.267 Cannot find device "nvmf_tgt_br" 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:25.267 Cannot find device "nvmf_tgt_br2" 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:25.267 Cannot find device "nvmf_br" 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # true 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:25.267 Cannot find device "nvmf_init_if" 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@161 -- # true 00:12:25.267 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:25.267 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:25.268 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:12:25.268 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:25.268 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:25.268 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:12:25.268 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:25.268 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:25.268 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 
-- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:25.268 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:25.268 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:25.268 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:25.268 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:25.268 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:25.268 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:25.268 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:25.268 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:25.268 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:25.268 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:25.268 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:25.268 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:25.268 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:25.527 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:25.527 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:25.527 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:25.527 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:25.527 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:25.527 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:25.527 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:25.527 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:25.527 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:25.527 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:12:25.527 00:12:25.527 --- 10.0.0.2 ping statistics --- 00:12:25.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.527 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:12:25.527 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:25.527 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:12:25.527 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:12:25.527 00:12:25.527 --- 10.0.0.3 ping statistics --- 00:12:25.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.527 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:12:25.527 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:25.527 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:25.527 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:12:25.527 00:12:25.527 --- 10.0.0.1 ping statistics --- 00:12:25.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.527 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:12:25.527 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:25.527 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:12:25.527 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:25.527 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:25.527 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:25.527 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:25.527 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:25.527 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:25.527 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:25.527 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:12:25.527 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:12:25.527 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:12:25.527 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:25.527 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:25.527 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:25.527 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=66980 00:12:25.527 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:12:25.527 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 66980 00:12:25.527 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 66980 ']' 00:12:25.527 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.527 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:25.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
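The nvmf_veth_init sequence traced above boils down to a small veth/bridge/namespace topology. Condensed into plain ip(8) and iptables commands, this sketch keeps only the commands visible in the trace; the pre-cleanup probes, the individual "ip link set ... up" steps and the error handling are omitted:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator leg, stays on the host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # first target leg
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target leg
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target legs move into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # NVMF_INITIATOR_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # NVMF_FIRST_TARGET_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # NVMF_SECOND_TARGET_IP
ip link add nvmf_br type bridge                              # the *_br peer ends are enslaved to one bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port used by the test listener
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings above (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) confirm both directions of that topology before nvmf_tgt is started inside nvmf_tgt_ns_spdk and waitforlisten polls /var/tmp/spdk.sock.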
00:12:25.527 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.527 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:25.527 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:25.786 [2024-07-25 08:54:32.667901] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:25.786 [2024-07-25 08:54:32.668050] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:25.786 [2024-07-25 08:54:32.841218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:26.045 [2024-07-25 08:54:33.137413] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:26.045 [2024-07-25 08:54:33.137513] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:26.045 [2024-07-25 08:54:33.137541] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:26.045 [2024-07-25 08:54:33.137565] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:26.045 [2024-07-25 08:54:33.137595] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:26.045 [2024-07-25 08:54:33.137910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:26.045 [2024-07-25 08:54:33.138662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:26.045 [2024-07-25 08:54:33.138852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:26.045 [2024-07-25 08:54:33.138864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:26.303 [2024-07-25 08:54:33.347888] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:26.561 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:26.561 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:12:26.561 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:26.562 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:26.562 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:26.562 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:26.562 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:26.562 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.562 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:26.820 [2024-07-25 08:54:33.679976] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:26.820 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:12:26.820 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:12:26.820 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:26.820 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:26.821 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:12:26.821 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:12:26.821 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:12:26.821 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.821 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:26.821 Malloc0 00:12:26.821 [2024-07-25 08:54:33.798187] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:26.821 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.821 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:12:26.821 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:26.821 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:26.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:26.821 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=67042 00:12:26.821 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 67042 /var/tmp/bdevperf.sock 00:12:26.821 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 67042 ']' 00:12:26.821 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:26.821 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:26.821 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
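The rpcs.txt batch assembled a few entries back (host_management.sh@22-@30) is never printed in the log; only its effects are visible, namely the Malloc0 bdev and the TCP listener on 10.0.0.2:4420. A hypothetical reconstruction of that batch, consistent with those effects, with MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 from the script header, and with the host NQN used later, would be:

# Hypothetical contents, shown only for orientation -- the real rpcs.txt is
# generated by the script and not echoed into this log.
cat > rpcs.txt <<'RPC'
bdev_malloc_create -b Malloc0 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
RPC
rpc_cmd < rpcs.txt   # rpc_cmd is the test helper wrapping scripts/rpc.py against /var/tmp/spdk.sock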
00:12:26.821 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:12:26.821 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:26.821 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:12:26.821 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:26.821 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:12:26.821 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:12:26.821 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:26.821 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:26.821 { 00:12:26.821 "params": { 00:12:26.821 "name": "Nvme$subsystem", 00:12:26.821 "trtype": "$TEST_TRANSPORT", 00:12:26.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:26.821 "adrfam": "ipv4", 00:12:26.821 "trsvcid": "$NVMF_PORT", 00:12:26.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:26.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:26.821 "hdgst": ${hdgst:-false}, 00:12:26.821 "ddgst": ${ddgst:-false} 00:12:26.821 }, 00:12:26.821 "method": "bdev_nvme_attach_controller" 00:12:26.821 } 00:12:26.821 EOF 00:12:26.821 )") 00:12:26.821 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:12:26.821 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:12:26.821 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:12:26.821 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:26.821 "params": { 00:12:26.821 "name": "Nvme0", 00:12:26.821 "trtype": "tcp", 00:12:26.821 "traddr": "10.0.0.2", 00:12:26.821 "adrfam": "ipv4", 00:12:26.821 "trsvcid": "4420", 00:12:26.821 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:26.821 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:26.821 "hdgst": false, 00:12:26.821 "ddgst": false 00:12:26.821 }, 00:12:26.821 "method": "bdev_nvme_attach_controller" 00:12:26.821 }' 00:12:27.079 [2024-07-25 08:54:33.976322] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:27.079 [2024-07-25 08:54:33.976470] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67042 ] 00:12:27.079 [2024-07-25 08:54:34.139943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.337 [2024-07-25 08:54:34.413908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.596 [2024-07-25 08:54:34.648909] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:27.854 Running I/O for 10 seconds... 
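The --json /dev/fd/63 handed to bdevperf above is the attach-controller config that gen_nvmf_target_json expands from its heredoc template and validates with jq; the printf in the trace shows only the inner method/params object, which the real helper in nvmf/common.sh then wraps in the usual subsystems/bdev/config envelope. A minimal stand-alone sketch of the same idea (gen_nvme0_json is a name made up here for illustration, and the envelope shape is an assumption based on SPDK's JSON config format):

gen_nvme0_json() {
    # Build the same config as the trace above, already wrapped for bdevperf --json.
    jq -n '{
        subsystems: [{
            subsystem: "bdev",
            config: [{
                method: "bdev_nvme_attach_controller",
                params: {
                    name: "Nvme0", trtype: "tcp", traddr: "10.0.0.2", adrfam: "ipv4",
                    trsvcid: "4420", subnqn: "nqn.2016-06.io.spdk:cnode0",
                    hostnqn: "nqn.2016-06.io.spdk:host0", hdgst: false, ddgst: false
                }
            }]
        }]
    }'
}
# bdevperf then reads it through process substitution, exactly as traced:
#   build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvme0_json) -q 64 -o 65536 -w verify -t 10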
00:12:27.854 08:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:27.854 08:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:12:27.854 08:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:12:27.854 08:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.854 08:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:27.854 08:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.854 08:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:27.854 08:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:12:27.854 08:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:12:27.854 08:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:12:27.854 08:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:12:27.854 08:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:12:27.854 08:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:12:27.854 08:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:27.854 08:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:12:27.854 08:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.854 08:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:27.854 08:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:27.854 08:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.112 08:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:12:28.112 08:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:12:28.112 08:54:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:12:28.371 08:54:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:12:28.371 08:54:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:28.371 08:54:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:12:28.371 08:54:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:28.371 08:54:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.371 08:54:35 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:28.371 08:54:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.372 08:54:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:12:28.372 08:54:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:12:28.372 08:54:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:12:28.372 08:54:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:12:28.372 08:54:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:12:28.372 08:54:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:28.372 08:54:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.372 08:54:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:28.372 08:54:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.372 08:54:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:28.372 08:54:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.372 08:54:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:28.372 08:54:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.372 08:54:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:12:28.372 [2024-07-25 08:54:35.317235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.372 [2024-07-25 08:54:35.317314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.372 [2024-07-25 08:54:35.317371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.372 [2024-07-25 08:54:35.317393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.372 [2024-07-25 08:54:35.317413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.372 [2024-07-25 08:54:35.317429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.372 [2024-07-25 08:54:35.317447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.372 [2024-07-25 08:54:35.317462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.372 [2024-07-25 08:54:35.317480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 
lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.372 [2024-07-25 08:54:35.317495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.372 [2024-07-25 08:54:35.317512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.372 [2024-07-25 08:54:35.317527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.372 [2024-07-25 08:54:35.317549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.372 [2024-07-25 08:54:35.317573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.372 [2024-07-25 08:54:35.317604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.372 [2024-07-25 08:54:35.317630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.372 [2024-07-25 08:54:35.317661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.372 [2024-07-25 08:54:35.317686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.372 [2024-07-25 08:54:35.317716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.372 [2024-07-25 08:54:35.317741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.372 [2024-07-25 08:54:35.317772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.372 [2024-07-25 08:54:35.317799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.372 [2024-07-25 08:54:35.317854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.372 [2024-07-25 08:54:35.317880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.372 [2024-07-25 08:54:35.317910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.372 [2024-07-25 08:54:35.317933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.372 [2024-07-25 08:54:35.317980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.372 [2024-07-25 08:54:35.318004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.372 [2024-07-25 08:54:35.318033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75520 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.372 [2024-07-25 08:54:35.318062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.372 [2024-07-25 08:54:35.318082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.372 [2024-07-25 08:54:35.318097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.372 [2024-07-25 08:54:35.318115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.372 [2024-07-25 08:54:35.318152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.372 [2024-07-25 08:54:35.318174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.372 [2024-07-25 08:54:35.318189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.372 [2024-07-25 08:54:35.318207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.372 [2024-07-25 08:54:35.318222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.372 [2024-07-25 08:54:35.318240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.372 [2024-07-25 08:54:35.318254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.372 [2024-07-25 08:54:35.318272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.372 [2024-07-25 08:54:35.318286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.373 [2024-07-25 08:54:35.318304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.373 [2024-07-25 08:54:35.318318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.373 [2024-07-25 08:54:35.318335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.373 [2024-07-25 08:54:35.318350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.373 [2024-07-25 08:54:35.318367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.373 [2024-07-25 08:54:35.318381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.373 [2024-07-25 08:54:35.318399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:12:28.373 [2024-07-25 08:54:35.318413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.373 [2024-07-25 08:54:35.318431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.373 [2024-07-25 08:54:35.318445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.373 [2024-07-25 08:54:35.318463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.373 [2024-07-25 08:54:35.318477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.373 [2024-07-25 08:54:35.318495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.373 [2024-07-25 08:54:35.318509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.373 [2024-07-25 08:54:35.318527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.373 [2024-07-25 08:54:35.318541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.373 [2024-07-25 08:54:35.318558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.373 [2024-07-25 08:54:35.318573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.373 [2024-07-25 08:54:35.318593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.373 [2024-07-25 08:54:35.318607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.373 [2024-07-25 08:54:35.318624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.373 [2024-07-25 08:54:35.318639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.373 [2024-07-25 08:54:35.318657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.373 [2024-07-25 08:54:35.318677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.373 [2024-07-25 08:54:35.318695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.373 [2024-07-25 08:54:35.318709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.373 [2024-07-25 08:54:35.318727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:12:28.373 [2024-07-25 08:54:35.318741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.373 [2024-07-25 08:54:35.318759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.373 [2024-07-25 08:54:35.318773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.373 [2024-07-25 08:54:35.318790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.373 [2024-07-25 08:54:35.318804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.373 [2024-07-25 08:54:35.318837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.373 [2024-07-25 08:54:35.318854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.373 [2024-07-25 08:54:35.318872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.373 [2024-07-25 08:54:35.318886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.373 [2024-07-25 08:54:35.318903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.373 [2024-07-25 08:54:35.318936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.373 [2024-07-25 08:54:35.318954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.373 [2024-07-25 08:54:35.318969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.373 [2024-07-25 08:54:35.318987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.373 [2024-07-25 08:54:35.319001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.373 [2024-07-25 08:54:35.319019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.373 [2024-07-25 08:54:35.319033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.373 [2024-07-25 08:54:35.319051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.373 [2024-07-25 08:54:35.319066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.373 [2024-07-25 08:54:35.319084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:12:28.373 [2024-07-25 08:54:35.319098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.373 [2024-07-25 08:54:35.319116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.373 [2024-07-25 08:54:35.319130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.373 [2024-07-25 08:54:35.319148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.373 [2024-07-25 08:54:35.319162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.374 [2024-07-25 08:54:35.319180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.374 [2024-07-25 08:54:35.319194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.374 [2024-07-25 08:54:35.319212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.374 [2024-07-25 08:54:35.319235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.374 [2024-07-25 08:54:35.319253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.374 [2024-07-25 08:54:35.319267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.374 [2024-07-25 08:54:35.319292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.374 [2024-07-25 08:54:35.319308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.374 [2024-07-25 08:54:35.319326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.374 [2024-07-25 08:54:35.319341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.374 [2024-07-25 08:54:35.319363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.374 [2024-07-25 08:54:35.319380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.374 [2024-07-25 08:54:35.319397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.374 [2024-07-25 08:54:35.319411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.374 [2024-07-25 08:54:35.319429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.374 
[2024-07-25 08:54:35.319443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.374 [2024-07-25 08:54:35.319460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.374 [2024-07-25 08:54:35.319474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.374 [2024-07-25 08:54:35.319492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.374 [2024-07-25 08:54:35.319506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.374 [2024-07-25 08:54:35.319525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.374 [2024-07-25 08:54:35.319541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.374 [2024-07-25 08:54:35.319559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.374 [2024-07-25 08:54:35.319573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.374 [2024-07-25 08:54:35.319590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.374 [2024-07-25 08:54:35.319604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.374 [2024-07-25 08:54:35.319621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.374 [2024-07-25 08:54:35.319635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.374 [2024-07-25 08:54:35.319652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.374 [2024-07-25 08:54:35.319666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.374 [2024-07-25 08:54:35.319683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.374 [2024-07-25 08:54:35.319697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.374 [2024-07-25 08:54:35.319714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:28.374 [2024-07-25 08:54:35.319730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.374 [2024-07-25 08:54:35.319746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b500 is same with the state(5) to be set 00:12:28.374 [2024-07-25 
08:54:35.320085] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b500 was disconnected and freed. reset controller. 00:12:28.374 [2024-07-25 08:54:35.320345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.374 [2024-07-25 08:54:35.320371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.374 [2024-07-25 08:54:35.320399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.374 [2024-07-25 08:54:35.320414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.374 [2024-07-25 08:54:35.320429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.374 [2024-07-25 08:54:35.320443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.374 [2024-07-25 08:54:35.320458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.374 [2024-07-25 08:54:35.320472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.374 [2024-07-25 08:54:35.320486] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(5) to be set 00:12:28.374 [2024-07-25 08:54:35.322069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:12:28.374 task offset: 73728 on job bdev=Nvme0n1 fails 00:12:28.374 00:12:28.374 Latency(us) 00:12:28.374 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:28.374 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:28.374 Job: Nvme0n1 ended in about 0.46 seconds with error 00:12:28.374 Verification LBA range: start 0x0 length 0x400 00:12:28.374 Nvme0n1 : 0.46 1250.63 78.16 138.96 0.00 44689.29 4289.63 43372.92 00:12:28.374 =================================================================================================================== 00:12:28.374 Total : 1250.63 78.16 138.96 0.00 44689.29 4289.63 43372.92 00:12:28.374 [2024-07-25 08:54:35.328241] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:28.374 [2024-07-25 08:54:35.328508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:12:28.374 [2024-07-25 08:54:35.342449] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
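The wall of "ABORTED - SQ DELETION" completions above is the intended effect of the RPC pair traced just before it, not a transport failure: removing the host's NQN from the subsystem makes the target tear down that host's I/O queues, bdevperf's in-flight writes complete aborted, its qpair is disconnected and freed, and the controller reset succeeds because the host has already been added back. Reduced to the two RPCs (rpc_cmd in the trace is effectively scripts/rpc.py pointed at /var/tmp/spdk.sock):

# revoke access: target deletes the host's submission queues, outstanding I/O
# completes with ABORTED - SQ DELETION and bdevperf starts resetting the controller
scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# restore access: the reset path's reconnect attempt then succeeds
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0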
00:12:29.331 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 67042 00:12:29.331 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:12:29.331 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:12:29.331 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:12:29.331 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:12:29.331 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:12:29.331 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:29.332 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:29.332 { 00:12:29.332 "params": { 00:12:29.332 "name": "Nvme$subsystem", 00:12:29.332 "trtype": "$TEST_TRANSPORT", 00:12:29.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:29.332 "adrfam": "ipv4", 00:12:29.332 "trsvcid": "$NVMF_PORT", 00:12:29.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:29.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:29.332 "hdgst": ${hdgst:-false}, 00:12:29.332 "ddgst": ${ddgst:-false} 00:12:29.332 }, 00:12:29.332 "method": "bdev_nvme_attach_controller" 00:12:29.332 } 00:12:29.332 EOF 00:12:29.332 )") 00:12:29.332 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:12:29.332 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:12:29.332 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:12:29.332 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:29.332 "params": { 00:12:29.332 "name": "Nvme0", 00:12:29.332 "trtype": "tcp", 00:12:29.332 "traddr": "10.0.0.2", 00:12:29.332 "adrfam": "ipv4", 00:12:29.332 "trsvcid": "4420", 00:12:29.332 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:29.332 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:29.332 "hdgst": false, 00:12:29.332 "ddgst": false 00:12:29.332 }, 00:12:29.332 "method": "bdev_nvme_attach_controller" 00:12:29.332 }' 00:12:29.332 [2024-07-25 08:54:36.426779] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:29.332 [2024-07-25 08:54:36.426966] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67084 ] 00:12:29.589 [2024-07-25 08:54:36.601825] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.848 [2024-07-25 08:54:36.868616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.106 [2024-07-25 08:54:37.099425] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:30.364 Running I/O for 1 seconds... 
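The tail of the test is a survivability check: the long-running bdevperf from the first pass is killed hard (the shell reports it as Killed just below), the scheduler CPU lock files it could not release are removed, and a short one-second verify run is started against the same subsystem to show the target kept serving I/O. In outline (paths shortened; gen_nvme0_json is the illustrative helper sketched earlier, standing in for gen_nvmf_target_json 0):

kill -9 "$perfpid"                        # 67042 in this run
rm -f /var/tmp/spdk_cpu_lock_00{0..4}     # core locks left behind by the SIGKILL
build/examples/bdevperf --json <(gen_nvme0_json) -q 64 -o 65536 -w verify -t 1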
00:12:31.297 00:12:31.297 Latency(us) 00:12:31.297 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:31.297 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:31.297 Verification LBA range: start 0x0 length 0x400 00:12:31.298 Nvme0n1 : 1.04 1289.72 80.61 0.00 0.00 48682.37 6642.97 45517.73 00:12:31.298 =================================================================================================================== 00:12:31.298 Total : 1289.72 80.61 0.00 0.00 48682.37 6642.97 45517.73 00:12:32.673 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 68: 67042 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:12:32.673 08:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:12:32.673 08:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:12:32.673 08:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:12:32.673 08:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:12:32.673 08:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:12:32.673 08:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:32.673 08:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:12:32.673 08:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:32.673 08:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:12:32.673 08:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:32.673 08:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:32.673 rmmod nvme_tcp 00:12:32.673 rmmod nvme_fabrics 00:12:32.673 rmmod nvme_keyring 00:12:32.673 08:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:32.673 08:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:12:32.673 08:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:12:32.673 08:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 66980 ']' 00:12:32.673 08:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 66980 00:12:32.673 08:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 66980 ']' 00:12:32.673 08:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 66980 00:12:32.673 08:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:12:32.673 08:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:32.673 08:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66980 00:12:32.673 killing process with pid 66980 00:12:32.673 08:54:39 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:32.673 08:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:32.673 08:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66980' 00:12:32.673 08:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 66980 00:12:32.673 08:54:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 66980 00:12:34.046 [2024-07-25 08:54:41.024307] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:12:34.046 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:34.046 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:34.046 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:34.046 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:34.046 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:34.046 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.046 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:34.046 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.046 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:34.046 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:12:34.305 ************************************ 00:12:34.305 END TEST nvmf_host_management 00:12:34.305 ************************************ 00:12:34.305 00:12:34.305 real 0m9.111s 00:12:34.305 user 0m35.760s 00:12:34.305 sys 0m1.989s 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:34.305 ************************************ 00:12:34.305 START TEST nvmf_lvol 00:12:34.305 ************************************ 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:34.305 * Looking for test storage... 
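The teardown traced above (killprocess on the target pid, nvme module unloads, namespace removal, address flush) is what nvmftestfini amounts to on this virtual-interface setup. A rough manual equivalent, handy when an aborted run leaves the plumbing behind, is sketched here; the netns delete is an assumption about what remove_spdk_ns boils down to, since its body is not traced in this log.

# Rough manual cleanup mirroring the traced nvmftestfini path.
kill -9 66980 2>/dev/null || true                       # target pid in this run
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true    # assumed effect of remove_spdk_ns
ip -4 addr flush nvmf_init_if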
00:12:34.305 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:34.305 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:12:34.306 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:34.306 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:34.306 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:34.306 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:34.306 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:34.306 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.306 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:34.306 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.306 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:34.306 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:34.306 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:34.306 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:34.306 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:34.306 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:34.306 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:34.306 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:34.306 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:34.306 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:34.306 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:34.306 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:34.306 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:34.306 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:34.306 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:34.306 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:34.306 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:34.306 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:34.306 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:34.306 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:34.306 Cannot find device "nvmf_tgt_br" 00:12:34.306 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:12:34.306 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:34.306 Cannot find device "nvmf_tgt_br2" 00:12:34.306 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:12:34.306 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:34.306 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:34.306 Cannot find device "nvmf_tgt_br" 00:12:34.306 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:12:34.306 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:34.306 Cannot find device "nvmf_tgt_br2" 00:12:34.306 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:12:34.306 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:34.564 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:34.564 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:34.564 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:34.564 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:12:34.564 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:34.564 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:34.564 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:12:34.564 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:34.564 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:34.564 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:34.564 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:34.564 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:34.564 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:34.564 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:34.564 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:34.564 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:34.564 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:34.564 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:34.564 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:34.564 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:34.564 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:34.564 08:54:41 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:34.564 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:34.564 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:34.564 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:34.564 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:34.564 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:34.564 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:34.564 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:34.564 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:34.564 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:34.564 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:34.564 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:12:34.564 00:12:34.564 --- 10.0.0.2 ping statistics --- 00:12:34.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.564 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:12:34.564 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:34.564 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:34.564 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.031 ms 00:12:34.564 00:12:34.564 --- 10.0.0.3 ping statistics --- 00:12:34.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.564 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:12:34.564 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:34.564 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:34.564 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:12:34.564 00:12:34.564 --- 10.0.0.1 ping statistics --- 00:12:34.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.564 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:12:34.564 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:34.564 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:12:34.564 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:34.564 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:34.564 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:34.564 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:34.564 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:34.564 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:34.564 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:34.822 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:12:34.822 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:34.822 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:34.822 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:34.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:34.822 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=67323 00:12:34.822 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 67323 00:12:34.822 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:34.822 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 67323 ']' 00:12:34.822 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.822 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:34.822 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.822 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:34.822 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:34.822 [2024-07-25 08:54:41.802549] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
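Stripped of the xtrace prefixes, the nvmf_veth_init sequence traced above builds a small bridged topology: nvmf_init_if (10.0.0.1/24) stays in the root namespace for the initiator, nvmf_tgt_if (10.0.0.2/24) is moved into nvmf_tgt_ns_spdk for the target, the peer ends are enslaved to the nvmf_br bridge, port 4420 is opened, and each address is pinged once. Condensed (link-up steps and the second target interface omitted):

# Condensed from the trace above; every command appears verbatim in the log.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2    # root namespace -> target, as checked above

With connectivity verified, nvmf_tgt is started inside the namespace with -m 0x7; its EAL parameter dump and reactor start-up messages follow.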
00:12:34.822 [2024-07-25 08:54:41.802742] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:35.090 [2024-07-25 08:54:41.986477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:35.377 [2024-07-25 08:54:42.330634] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:35.377 [2024-07-25 08:54:42.330707] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:35.378 [2024-07-25 08:54:42.330726] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:35.378 [2024-07-25 08:54:42.330741] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:35.378 [2024-07-25 08:54:42.330754] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:35.378 [2024-07-25 08:54:42.330938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:35.378 [2024-07-25 08:54:42.331686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:35.378 [2024-07-25 08:54:42.331704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.636 [2024-07-25 08:54:42.539725] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:35.636 08:54:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:35.636 08:54:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:12:35.637 08:54:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:35.637 08:54:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:35.637 08:54:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:35.637 08:54:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:35.637 08:54:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:35.895 [2024-07-25 08:54:42.967209] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:36.154 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:36.413 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:12:36.413 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:36.672 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:12:36.672 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:12:36.930 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:12:37.189 08:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=eae950ed-e248-495a-b866-fe32e3f459e1 00:12:37.189 08:54:44 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u eae950ed-e248-495a-b866-fe32e3f459e1 lvol 20 00:12:37.447 08:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=d60ecf3f-afbd-43bc-8a55-fc89429ce5a0 00:12:37.447 08:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:37.705 08:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d60ecf3f-afbd-43bc-8a55-fc89429ce5a0 00:12:38.270 08:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:38.270 [2024-07-25 08:54:45.310302] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:38.270 08:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:38.528 08:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=67407 00:12:38.528 08:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:12:38.528 08:54:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:12:39.902 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot d60ecf3f-afbd-43bc-8a55-fc89429ce5a0 MY_SNAPSHOT 00:12:39.902 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=5d773e67-6164-4679-bfce-65f73f85de75 00:12:39.902 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize d60ecf3f-afbd-43bc-8a55-fc89429ce5a0 30 00:12:40.159 08:54:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 5d773e67-6164-4679-bfce-65f73f85de75 MY_CLONE 00:12:40.417 08:54:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=78dcd4b0-0ecf-457a-99d1-954d1f0acff8 00:12:40.417 08:54:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 78dcd4b0-0ecf-457a-99d1-954d1f0acff8 00:12:41.012 08:54:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 67407 00:12:49.131 Initializing NVMe Controllers 00:12:49.131 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:49.131 Controller IO queue size 128, less than required. 00:12:49.131 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:49.131 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:12:49.131 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:12:49.131 Initialization complete. Launching workers. 
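Reduced to its RPC calls, the path the lvol test traced above is: create the TCP transport, build two malloc bdevs and stripe them into raid0, put a logical volume store on the RAID, carve an lvol out of it, and export the lvol over NVMe/TCP. While spdk_nvme_perf (queue depth 128, 4096-byte random writes, 10 seconds, core mask 0x18) hammers the namespace, the volume is snapshotted, resized, cloned, and inflated; the latency table that follows summarizes that run. Condensed, with rpc.py standing for /home/vagrant/spdk_repo/spdk/scripts/rpc.py and placeholders for the UUIDs this run generated:

# Condensed from the trace above; UUID placeholders correspond to the concrete
# values logged (eae950ed-..., d60ecf3f-..., 5d773e67-..., 78dcd4b0-...).
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512                                   # Malloc0
rpc.py bdev_malloc_create 64 512                                   # Malloc1
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
rpc.py bdev_lvol_create_lvstore raid0 lvs                          # -> <lvs-uuid>
rpc.py bdev_lvol_create -u <lvs-uuid> lvol 20                      # -> <lvol-uuid>
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# ... spdk_nvme_perf runs against the listener while the volume is reshaped:
rpc.py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT
rpc.py bdev_lvol_resize <lvol-uuid> 30
rpc.py bdev_lvol_clone <snapshot-uuid> MY_CLONE
rpc.py bdev_lvol_inflate <clone-uuid>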
00:12:49.131 ======================================================== 00:12:49.131 Latency(us) 00:12:49.131 Device Information : IOPS MiB/s Average min max 00:12:49.131 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8300.70 32.42 15418.10 368.83 201670.07 00:12:49.131 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8183.20 31.97 15647.97 4680.66 211235.95 00:12:49.131 ======================================================== 00:12:49.131 Total : 16483.90 64.39 15532.22 368.83 211235.95 00:12:49.131 00:12:49.131 08:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:49.389 08:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete d60ecf3f-afbd-43bc-8a55-fc89429ce5a0 00:12:49.647 08:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u eae950ed-e248-495a-b866-fe32e3f459e1 00:12:50.212 08:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:12:50.212 08:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:12:50.212 08:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:12:50.212 08:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:50.212 08:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:12:50.212 08:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:50.212 08:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:12:50.212 08:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:50.212 08:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:50.212 rmmod nvme_tcp 00:12:50.212 rmmod nvme_fabrics 00:12:50.212 rmmod nvme_keyring 00:12:50.212 08:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:50.212 08:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:12:50.212 08:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:12:50.212 08:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 67323 ']' 00:12:50.212 08:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 67323 00:12:50.212 08:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 67323 ']' 00:12:50.213 08:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 67323 00:12:50.213 08:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:12:50.213 08:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:50.213 08:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67323 00:12:50.213 killing process with pid 67323 00:12:50.213 08:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:50.213 08:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:50.213 08:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 67323' 00:12:50.213 08:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 67323 00:12:50.213 08:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 67323 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:52.114 ************************************ 00:12:52.114 END TEST nvmf_lvol 00:12:52.114 ************************************ 00:12:52.114 00:12:52.114 real 0m17.579s 00:12:52.114 user 1m9.446s 00:12:52.114 sys 0m4.301s 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:52.114 ************************************ 00:12:52.114 START TEST nvmf_lvs_grow 00:12:52.114 ************************************ 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:52.114 * Looking for test storage... 
00:12:52.114 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 
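Although nvmf_lvs_grow drives I/O with SPDK's own initiators, the sourced common.sh also prepares kernel nvme-cli parameters: NVME_HOSTNQN from nvme gen-hostnqn, the matching NVME_HOSTID, and NVME_CONNECT='nvme connect'. For tests that do use the kernel initiator, those pieces come together roughly as below; this is illustrative only, is not executed by this test, and the subsystem NQN is simply the cnode0 used elsewhere in this log.

# Illustrative kernel-initiator connect using the host identity generated above;
# not part of nvmf_lvs_grow.sh.
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode0 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 \
    --hostid=a4705431-95c9-4bc1-9185-4a8233d2d7f5

nvmftestinit then rebuilds the veth topology for this test, as traced next.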
00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:52.114 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:52.115 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:52.115 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:52.115 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:52.115 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:52.115 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:52.115 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:52.115 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:52.115 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:52.115 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:52.115 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:52.115 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:52.115 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:52.115 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:52.115 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:52.115 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:52.115 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:52.115 08:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:52.115 Cannot find device "nvmf_tgt_br" 00:12:52.115 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:12:52.115 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:52.115 Cannot find device "nvmf_tgt_br2" 00:12:52.115 08:54:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:12:52.115 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:52.115 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:52.115 Cannot find device "nvmf_tgt_br" 00:12:52.115 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:12:52.115 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:52.115 Cannot find device "nvmf_tgt_br2" 00:12:52.115 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:12:52.115 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:52.115 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:52.115 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:52.115 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:52.115 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:12:52.115 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:52.115 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:52.115 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:12:52.115 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:52.115 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:52.115 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:52.115 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:52.115 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:52.115 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:52.115 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:52.115 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:52.115 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:52.115 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:52.115 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:52.115 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:52.115 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:52.115 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:52.115 08:54:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:52.374 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:52.374 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:52.374 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:52.374 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:52.374 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:52.374 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:52.374 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:52.374 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:52.374 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:52.374 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:52.374 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:12:52.374 00:12:52.374 --- 10.0.0.2 ping statistics --- 00:12:52.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.374 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:12:52.374 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:52.374 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:52.374 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.031 ms 00:12:52.374 00:12:52.374 --- 10.0.0.3 ping statistics --- 00:12:52.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.374 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:12:52.374 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:52.374 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:52.374 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:12:52.374 00:12:52.374 --- 10.0.0.1 ping statistics --- 00:12:52.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.374 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:12:52.374 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:52.374 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:12:52.374 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:52.374 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:52.374 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:52.374 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:52.374 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:52.374 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:52.374 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:52.374 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:12:52.374 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:52.374 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:52.374 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:52.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.374 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=67750 00:12:52.374 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:52.374 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 67750 00:12:52.374 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 67750 ']' 00:12:52.374 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.374 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:52.374 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.374 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:52.374 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:52.374 [2024-07-25 08:54:59.474200] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
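The nvmfappstart call above launches nvmf_tgt inside the namespace (-m 0x1, pid 67750 here) and then blocks in waitforlisten until the application is ready on /var/tmp/spdk.sock. A minimal stand-in with the same intent, not the harness function itself, is simply to poll the RPC socket:

# Sketch only: poll the default RPC socket until the target answers a trivial
# method, roughly what waiting on /var/tmp/spdk.sock amounts to.
for _ in $(seq 1 100); do
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; then
        echo "nvmf_tgt is up"
        break
    fi
    sleep 0.1
done

The EAL parameter dump for that nvmf_tgt instance follows.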
00:12:52.374 [2024-07-25 08:54:59.474652] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:52.632 [2024-07-25 08:54:59.655359] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.890 [2024-07-25 08:55:00.001691] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:52.890 [2024-07-25 08:55:00.002009] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:52.890 [2024-07-25 08:55:00.002039] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:52.890 [2024-07-25 08:55:00.002056] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:52.890 [2024-07-25 08:55:00.002069] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:52.890 [2024-07-25 08:55:00.002121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.148 [2024-07-25 08:55:00.208551] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:53.405 08:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:53.405 08:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:12:53.405 08:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:53.405 08:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:53.405 08:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:53.405 08:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:53.405 08:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:53.663 [2024-07-25 08:55:00.667694] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:53.663 08:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:12:53.663 08:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:53.663 08:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:53.663 08:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:53.663 ************************************ 00:12:53.663 START TEST lvs_grow_clean 00:12:53.663 ************************************ 00:12:53.663 08:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:12:53.663 08:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:53.663 08:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:53.663 08:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:53.663 08:55:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:53.663 08:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:53.664 08:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:53.664 08:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:53.664 08:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:53.664 08:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:53.921 08:55:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:53.921 08:55:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:54.487 08:55:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=a2262c49-b68c-42ce-a445-950e6da90620 00:12:54.487 08:55:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2262c49-b68c-42ce-a445-950e6da90620 00:12:54.487 08:55:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:54.745 08:55:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:54.745 08:55:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:54.745 08:55:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a2262c49-b68c-42ce-a445-950e6da90620 lvol 150 00:12:55.004 08:55:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=4de2a17e-5df3-4cca-8b9a-f83274510556 00:12:55.004 08:55:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:55.004 08:55:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:55.263 [2024-07-25 08:55:02.180415] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:55.263 [2024-07-25 08:55:02.180571] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:55.263 true 00:12:55.263 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2262c49-b68c-42ce-a445-950e6da90620 00:12:55.263 08:55:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:55.521 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:55.521 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:55.779 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4de2a17e-5df3-4cca-8b9a-f83274510556 00:12:56.038 08:55:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:56.297 [2024-07-25 08:55:03.297322] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:56.297 08:55:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:56.555 08:55:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=67839 00:12:56.555 08:55:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:56.555 08:55:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:56.555 08:55:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 67839 /var/tmp/bdevperf.sock 00:12:56.555 08:55:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 67839 ']' 00:12:56.555 08:55:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:56.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:56.555 08:55:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:56.555 08:55:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:56.555 08:55:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:56.555 08:55:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:56.813 [2024-07-25 08:55:03.672764] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:12:56.813 [2024-07-25 08:55:03.672982] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67839 ] 00:12:56.813 [2024-07-25 08:55:03.849403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.072 [2024-07-25 08:55:04.127128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:57.330 [2024-07-25 08:55:04.332130] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:57.588 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:57.588 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:12:57.588 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:58.154 Nvme0n1 00:12:58.154 08:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:58.154 [ 00:12:58.154 { 00:12:58.154 "name": "Nvme0n1", 00:12:58.154 "aliases": [ 00:12:58.154 "4de2a17e-5df3-4cca-8b9a-f83274510556" 00:12:58.154 ], 00:12:58.154 "product_name": "NVMe disk", 00:12:58.154 "block_size": 4096, 00:12:58.154 "num_blocks": 38912, 00:12:58.154 "uuid": "4de2a17e-5df3-4cca-8b9a-f83274510556", 00:12:58.154 "assigned_rate_limits": { 00:12:58.154 "rw_ios_per_sec": 0, 00:12:58.154 "rw_mbytes_per_sec": 0, 00:12:58.154 "r_mbytes_per_sec": 0, 00:12:58.154 "w_mbytes_per_sec": 0 00:12:58.154 }, 00:12:58.154 "claimed": false, 00:12:58.154 "zoned": false, 00:12:58.154 "supported_io_types": { 00:12:58.154 "read": true, 00:12:58.154 "write": true, 00:12:58.154 "unmap": true, 00:12:58.154 "flush": true, 00:12:58.154 "reset": true, 00:12:58.154 "nvme_admin": true, 00:12:58.154 "nvme_io": true, 00:12:58.154 "nvme_io_md": false, 00:12:58.154 "write_zeroes": true, 00:12:58.154 "zcopy": false, 00:12:58.154 "get_zone_info": false, 00:12:58.154 "zone_management": false, 00:12:58.154 "zone_append": false, 00:12:58.154 "compare": true, 00:12:58.154 "compare_and_write": true, 00:12:58.154 "abort": true, 00:12:58.154 "seek_hole": false, 00:12:58.154 "seek_data": false, 00:12:58.154 "copy": true, 00:12:58.154 "nvme_iov_md": false 00:12:58.154 }, 00:12:58.154 "memory_domains": [ 00:12:58.154 { 00:12:58.154 "dma_device_id": "system", 00:12:58.154 "dma_device_type": 1 00:12:58.154 } 00:12:58.154 ], 00:12:58.154 "driver_specific": { 00:12:58.154 "nvme": [ 00:12:58.154 { 00:12:58.154 "trid": { 00:12:58.154 "trtype": "TCP", 00:12:58.154 "adrfam": "IPv4", 00:12:58.154 "traddr": "10.0.0.2", 00:12:58.154 "trsvcid": "4420", 00:12:58.154 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:58.154 }, 00:12:58.154 "ctrlr_data": { 00:12:58.154 "cntlid": 1, 00:12:58.154 "vendor_id": "0x8086", 00:12:58.154 "model_number": "SPDK bdev Controller", 00:12:58.154 "serial_number": "SPDK0", 00:12:58.155 "firmware_revision": "24.09", 00:12:58.155 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:58.155 "oacs": { 00:12:58.155 "security": 0, 00:12:58.155 "format": 0, 00:12:58.155 "firmware": 0, 00:12:58.155 "ns_manage": 0 
00:12:58.155 }, 00:12:58.155 "multi_ctrlr": true, 00:12:58.155 "ana_reporting": false 00:12:58.155 }, 00:12:58.155 "vs": { 00:12:58.155 "nvme_version": "1.3" 00:12:58.155 }, 00:12:58.155 "ns_data": { 00:12:58.155 "id": 1, 00:12:58.155 "can_share": true 00:12:58.155 } 00:12:58.155 } 00:12:58.155 ], 00:12:58.155 "mp_policy": "active_passive" 00:12:58.155 } 00:12:58.155 } 00:12:58.155 ] 00:12:58.155 08:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=67862 00:12:58.155 08:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:58.155 08:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:58.413 Running I/O for 10 seconds... 00:12:59.347 Latency(us) 00:12:59.347 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:59.347 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:59.347 Nvme0n1 : 1.00 5842.00 22.82 0.00 0.00 0.00 0.00 0.00 00:12:59.347 =================================================================================================================== 00:12:59.347 Total : 5842.00 22.82 0.00 0.00 0.00 0.00 0.00 00:12:59.347 00:13:00.279 08:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a2262c49-b68c-42ce-a445-950e6da90620 00:13:00.279 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:00.279 Nvme0n1 : 2.00 5905.50 23.07 0.00 0.00 0.00 0.00 0.00 00:13:00.279 =================================================================================================================== 00:13:00.279 Total : 5905.50 23.07 0.00 0.00 0.00 0.00 0.00 00:13:00.279 00:13:00.537 true 00:13:00.538 08:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2262c49-b68c-42ce-a445-950e6da90620 00:13:00.538 08:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:00.795 08:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:00.795 08:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:00.795 08:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 67862 00:13:01.360 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:01.360 Nvme0n1 : 3.00 5811.00 22.70 0.00 0.00 0.00 0.00 0.00 00:13:01.360 =================================================================================================================== 00:13:01.360 Total : 5811.00 22.70 0.00 0.00 0.00 0.00 0.00 00:13:01.360 00:13:02.295 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:02.295 Nvme0n1 : 4.00 5808.25 22.69 0.00 0.00 0.00 0.00 0.00 00:13:02.295 =================================================================================================================== 00:13:02.295 Total : 5808.25 22.69 0.00 0.00 0.00 0.00 0.00 00:13:02.295 00:13:03.671 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:03.671 Nvme0n1 : 5.00 
5815.00 22.71 0.00 0.00 0.00 0.00 0.00 00:13:03.671 =================================================================================================================== 00:13:03.671 Total : 5815.00 22.71 0.00 0.00 0.00 0.00 0.00 00:13:03.671 00:13:04.605 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:04.605 Nvme0n1 : 6.00 5798.33 22.65 0.00 0.00 0.00 0.00 0.00 00:13:04.605 =================================================================================================================== 00:13:04.605 Total : 5798.33 22.65 0.00 0.00 0.00 0.00 0.00 00:13:04.605 00:13:05.538 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:05.538 Nvme0n1 : 7.00 5804.57 22.67 0.00 0.00 0.00 0.00 0.00 00:13:05.538 =================================================================================================================== 00:13:05.538 Total : 5804.57 22.67 0.00 0.00 0.00 0.00 0.00 00:13:05.538 00:13:06.469 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:06.469 Nvme0n1 : 8.00 5809.25 22.69 0.00 0.00 0.00 0.00 0.00 00:13:06.469 =================================================================================================================== 00:13:06.469 Total : 5809.25 22.69 0.00 0.00 0.00 0.00 0.00 00:13:06.469 00:13:07.401 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:07.401 Nvme0n1 : 9.00 5812.89 22.71 0.00 0.00 0.00 0.00 0.00 00:13:07.401 =================================================================================================================== 00:13:07.401 Total : 5812.89 22.71 0.00 0.00 0.00 0.00 0.00 00:13:07.401 00:13:08.337 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:08.337 Nvme0n1 : 10.00 5803.10 22.67 0.00 0.00 0.00 0.00 0.00 00:13:08.337 =================================================================================================================== 00:13:08.337 Total : 5803.10 22.67 0.00 0.00 0.00 0.00 0.00 00:13:08.337 00:13:08.337 00:13:08.337 Latency(us) 00:13:08.337 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:08.337 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:08.337 Nvme0n1 : 10.01 5809.56 22.69 0.00 0.00 22024.78 5004.57 71493.82 00:13:08.337 =================================================================================================================== 00:13:08.337 Total : 5809.56 22.69 0.00 0.00 22024.78 5004.57 71493.82 00:13:08.337 0 00:13:08.337 08:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 67839 00:13:08.337 08:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 67839 ']' 00:13:08.337 08:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 67839 00:13:08.337 08:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:13:08.337 08:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:08.337 08:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67839 00:13:08.337 killing process with pid 67839 00:13:08.337 Received shutdown signal, test time was about 10.000000 seconds 00:13:08.337 00:13:08.337 Latency(us) 00:13:08.337 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:13:08.337 =================================================================================================================== 00:13:08.337 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:08.337 08:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:08.337 08:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:08.337 08:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67839' 00:13:08.337 08:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 67839 00:13:08.337 08:55:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 67839 00:13:09.713 08:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:09.971 08:55:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:10.229 08:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2262c49-b68c-42ce-a445-950e6da90620 00:13:10.229 08:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:10.487 08:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:10.487 08:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:13:10.487 08:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:10.745 [2024-07-25 08:55:17.750848] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:10.745 08:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2262c49-b68c-42ce-a445-950e6da90620 00:13:10.745 08:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:13:10.745 08:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2262c49-b68c-42ce-a445-950e6da90620 00:13:10.745 08:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:10.745 08:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:10.745 08:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:10.745 08:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:10.745 08:55:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:10.745 08:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:10.745 08:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:10.745 08:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:10.745 08:55:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2262c49-b68c-42ce-a445-950e6da90620 00:13:11.002 request: 00:13:11.002 { 00:13:11.002 "uuid": "a2262c49-b68c-42ce-a445-950e6da90620", 00:13:11.002 "method": "bdev_lvol_get_lvstores", 00:13:11.002 "req_id": 1 00:13:11.002 } 00:13:11.002 Got JSON-RPC error response 00:13:11.002 response: 00:13:11.002 { 00:13:11.002 "code": -19, 00:13:11.002 "message": "No such device" 00:13:11.002 } 00:13:11.002 08:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:13:11.002 08:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:11.002 08:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:11.002 08:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:11.002 08:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:11.260 aio_bdev 00:13:11.260 08:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4de2a17e-5df3-4cca-8b9a-f83274510556 00:13:11.260 08:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=4de2a17e-5df3-4cca-8b9a-f83274510556 00:13:11.260 08:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:11.260 08:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:13:11.260 08:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:11.260 08:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:11.260 08:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:11.517 08:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4de2a17e-5df3-4cca-8b9a-f83274510556 -t 2000 00:13:11.776 [ 00:13:11.776 { 00:13:11.776 "name": "4de2a17e-5df3-4cca-8b9a-f83274510556", 00:13:11.776 "aliases": [ 00:13:11.776 "lvs/lvol" 00:13:11.776 ], 00:13:11.776 "product_name": "Logical Volume", 00:13:11.776 "block_size": 4096, 00:13:11.776 "num_blocks": 38912, 00:13:11.776 "uuid": "4de2a17e-5df3-4cca-8b9a-f83274510556", 00:13:11.776 
"assigned_rate_limits": { 00:13:11.776 "rw_ios_per_sec": 0, 00:13:11.776 "rw_mbytes_per_sec": 0, 00:13:11.776 "r_mbytes_per_sec": 0, 00:13:11.776 "w_mbytes_per_sec": 0 00:13:11.776 }, 00:13:11.776 "claimed": false, 00:13:11.776 "zoned": false, 00:13:11.776 "supported_io_types": { 00:13:11.776 "read": true, 00:13:11.776 "write": true, 00:13:11.776 "unmap": true, 00:13:11.776 "flush": false, 00:13:11.776 "reset": true, 00:13:11.776 "nvme_admin": false, 00:13:11.776 "nvme_io": false, 00:13:11.776 "nvme_io_md": false, 00:13:11.776 "write_zeroes": true, 00:13:11.776 "zcopy": false, 00:13:11.776 "get_zone_info": false, 00:13:11.776 "zone_management": false, 00:13:11.776 "zone_append": false, 00:13:11.776 "compare": false, 00:13:11.776 "compare_and_write": false, 00:13:11.776 "abort": false, 00:13:11.776 "seek_hole": true, 00:13:11.776 "seek_data": true, 00:13:11.776 "copy": false, 00:13:11.776 "nvme_iov_md": false 00:13:11.776 }, 00:13:11.776 "driver_specific": { 00:13:11.776 "lvol": { 00:13:11.776 "lvol_store_uuid": "a2262c49-b68c-42ce-a445-950e6da90620", 00:13:11.776 "base_bdev": "aio_bdev", 00:13:11.776 "thin_provision": false, 00:13:11.776 "num_allocated_clusters": 38, 00:13:11.776 "snapshot": false, 00:13:11.776 "clone": false, 00:13:11.776 "esnap_clone": false 00:13:11.776 } 00:13:11.776 } 00:13:11.776 } 00:13:11.776 ] 00:13:11.776 08:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:13:11.776 08:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2262c49-b68c-42ce-a445-950e6da90620 00:13:11.776 08:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:12.343 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:12.343 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2262c49-b68c-42ce-a445-950e6da90620 00:13:12.343 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:12.343 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:12.343 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 4de2a17e-5df3-4cca-8b9a-f83274510556 00:13:12.600 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a2262c49-b68c-42ce-a445-950e6da90620 00:13:12.859 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:13.117 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:13.684 00:13:13.684 ************************************ 00:13:13.684 END TEST lvs_grow_clean 00:13:13.684 ************************************ 00:13:13.684 real 0m19.825s 00:13:13.684 user 0m18.704s 00:13:13.684 sys 0m2.623s 00:13:13.684 08:55:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:13.684 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:13.684 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:13:13.684 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:13.684 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:13.684 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:13.684 ************************************ 00:13:13.684 START TEST lvs_grow_dirty 00:13:13.684 ************************************ 00:13:13.684 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:13:13.684 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:13.684 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:13.684 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:13.684 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:13.684 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:13.684 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:13.684 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:13.684 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:13.684 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:13.942 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:13.942 08:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:14.200 08:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=0639d63f-3589-4694-bf6b-9dc961d863ad 00:13:14.200 08:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:14.200 08:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0639d63f-3589-4694-bf6b-9dc961d863ad 00:13:14.458 08:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:14.458 08:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # 
(( data_clusters == 49 )) 00:13:14.458 08:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0639d63f-3589-4694-bf6b-9dc961d863ad lvol 150 00:13:14.716 08:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=0892789f-81f3-4c4e-bcb9-46d13fb3392f 00:13:14.716 08:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:14.716 08:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:14.975 [2024-07-25 08:55:21.904460] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:14.975 [2024-07-25 08:55:21.904795] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:14.975 true 00:13:14.975 08:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:14.975 08:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0639d63f-3589-4694-bf6b-9dc961d863ad 00:13:15.233 08:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:15.233 08:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:15.491 08:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0892789f-81f3-4c4e-bcb9-46d13fb3392f 00:13:15.751 08:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:15.751 [2024-07-25 08:55:22.861291] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:16.092 08:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:16.092 08:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=68127 00:13:16.092 08:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:16.092 08:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:16.092 08:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 68127 /var/tmp/bdevperf.sock 00:13:16.092 08:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 68127 ']' 00:13:16.092 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:16.092 08:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:16.092 08:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:16.092 08:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:16.092 08:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:16.092 08:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:16.353 [2024-07-25 08:55:23.215336] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:16.353 [2024-07-25 08:55:23.215517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68127 ] 00:13:16.353 [2024-07-25 08:55:23.392381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.612 [2024-07-25 08:55:23.635189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:16.871 [2024-07-25 08:55:23.841215] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:17.134 08:55:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:17.134 08:55:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:13:17.134 08:55:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:17.399 Nvme0n1 00:13:17.399 08:55:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:17.658 [ 00:13:17.658 { 00:13:17.658 "name": "Nvme0n1", 00:13:17.658 "aliases": [ 00:13:17.658 "0892789f-81f3-4c4e-bcb9-46d13fb3392f" 00:13:17.658 ], 00:13:17.658 "product_name": "NVMe disk", 00:13:17.658 "block_size": 4096, 00:13:17.658 "num_blocks": 38912, 00:13:17.658 "uuid": "0892789f-81f3-4c4e-bcb9-46d13fb3392f", 00:13:17.658 "assigned_rate_limits": { 00:13:17.658 "rw_ios_per_sec": 0, 00:13:17.658 "rw_mbytes_per_sec": 0, 00:13:17.658 "r_mbytes_per_sec": 0, 00:13:17.658 "w_mbytes_per_sec": 0 00:13:17.658 }, 00:13:17.658 "claimed": false, 00:13:17.658 "zoned": false, 00:13:17.658 "supported_io_types": { 00:13:17.658 "read": true, 00:13:17.658 "write": true, 00:13:17.658 "unmap": true, 00:13:17.658 "flush": true, 00:13:17.658 "reset": true, 00:13:17.658 "nvme_admin": true, 00:13:17.658 "nvme_io": true, 00:13:17.658 "nvme_io_md": false, 00:13:17.658 "write_zeroes": true, 00:13:17.658 "zcopy": false, 00:13:17.658 "get_zone_info": false, 00:13:17.658 "zone_management": false, 00:13:17.658 "zone_append": false, 00:13:17.658 "compare": true, 00:13:17.658 "compare_and_write": true, 00:13:17.658 "abort": true, 00:13:17.658 "seek_hole": false, 
00:13:17.658 "seek_data": false, 00:13:17.658 "copy": true, 00:13:17.658 "nvme_iov_md": false 00:13:17.658 }, 00:13:17.658 "memory_domains": [ 00:13:17.658 { 00:13:17.658 "dma_device_id": "system", 00:13:17.658 "dma_device_type": 1 00:13:17.658 } 00:13:17.658 ], 00:13:17.658 "driver_specific": { 00:13:17.658 "nvme": [ 00:13:17.658 { 00:13:17.658 "trid": { 00:13:17.658 "trtype": "TCP", 00:13:17.658 "adrfam": "IPv4", 00:13:17.658 "traddr": "10.0.0.2", 00:13:17.658 "trsvcid": "4420", 00:13:17.658 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:17.658 }, 00:13:17.658 "ctrlr_data": { 00:13:17.658 "cntlid": 1, 00:13:17.658 "vendor_id": "0x8086", 00:13:17.658 "model_number": "SPDK bdev Controller", 00:13:17.658 "serial_number": "SPDK0", 00:13:17.658 "firmware_revision": "24.09", 00:13:17.658 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:17.658 "oacs": { 00:13:17.658 "security": 0, 00:13:17.658 "format": 0, 00:13:17.658 "firmware": 0, 00:13:17.658 "ns_manage": 0 00:13:17.658 }, 00:13:17.658 "multi_ctrlr": true, 00:13:17.658 "ana_reporting": false 00:13:17.658 }, 00:13:17.658 "vs": { 00:13:17.658 "nvme_version": "1.3" 00:13:17.658 }, 00:13:17.658 "ns_data": { 00:13:17.658 "id": 1, 00:13:17.658 "can_share": true 00:13:17.658 } 00:13:17.658 } 00:13:17.658 ], 00:13:17.658 "mp_policy": "active_passive" 00:13:17.658 } 00:13:17.658 } 00:13:17.658 ] 00:13:17.658 08:55:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=68145 00:13:17.658 08:55:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:17.659 08:55:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:17.917 Running I/O for 10 seconds... 
00:13:18.851 Latency(us) 00:13:18.851 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:18.851 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:18.851 Nvme0n1 : 1.00 5969.00 23.32 0.00 0.00 0.00 0.00 0.00 00:13:18.851 =================================================================================================================== 00:13:18.851 Total : 5969.00 23.32 0.00 0.00 0.00 0.00 0.00 00:13:18.851 00:13:19.785 08:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0639d63f-3589-4694-bf6b-9dc961d863ad 00:13:19.785 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:19.785 Nvme0n1 : 2.00 5969.00 23.32 0.00 0.00 0.00 0.00 0.00 00:13:19.785 =================================================================================================================== 00:13:19.785 Total : 5969.00 23.32 0.00 0.00 0.00 0.00 0.00 00:13:19.785 00:13:20.043 true 00:13:20.043 08:55:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0639d63f-3589-4694-bf6b-9dc961d863ad 00:13:20.043 08:55:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:20.331 08:55:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:20.331 08:55:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:20.331 08:55:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 68145 00:13:20.897 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:20.897 Nvme0n1 : 3.00 6011.33 23.48 0.00 0.00 0.00 0.00 0.00 00:13:20.897 =================================================================================================================== 00:13:20.897 Total : 6011.33 23.48 0.00 0.00 0.00 0.00 0.00 00:13:20.897 00:13:21.831 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:21.831 Nvme0n1 : 4.00 5788.25 22.61 0.00 0.00 0.00 0.00 0.00 00:13:21.831 =================================================================================================================== 00:13:21.831 Total : 5788.25 22.61 0.00 0.00 0.00 0.00 0.00 00:13:21.831 00:13:22.765 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:22.765 Nvme0n1 : 5.00 5773.60 22.55 0.00 0.00 0.00 0.00 0.00 00:13:22.765 =================================================================================================================== 00:13:22.765 Total : 5773.60 22.55 0.00 0.00 0.00 0.00 0.00 00:13:22.765 00:13:24.167 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:24.167 Nvme0n1 : 6.00 5763.83 22.51 0.00 0.00 0.00 0.00 0.00 00:13:24.167 =================================================================================================================== 00:13:24.167 Total : 5763.83 22.51 0.00 0.00 0.00 0.00 0.00 00:13:24.167 00:13:25.100 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:25.100 Nvme0n1 : 7.00 5756.86 22.49 0.00 0.00 0.00 0.00 0.00 00:13:25.100 =================================================================================================================== 00:13:25.100 
Total : 5756.86 22.49 0.00 0.00 0.00 0.00 0.00 00:13:25.100 00:13:26.034 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:26.034 Nvme0n1 : 8.00 5767.50 22.53 0.00 0.00 0.00 0.00 0.00 00:13:26.034 =================================================================================================================== 00:13:26.034 Total : 5767.50 22.53 0.00 0.00 0.00 0.00 0.00 00:13:26.034 00:13:26.966 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:26.966 Nvme0n1 : 9.00 5761.67 22.51 0.00 0.00 0.00 0.00 0.00 00:13:26.966 =================================================================================================================== 00:13:26.966 Total : 5761.67 22.51 0.00 0.00 0.00 0.00 0.00 00:13:26.966 00:13:27.900 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:27.900 Nvme0n1 : 10.00 5744.30 22.44 0.00 0.00 0.00 0.00 0.00 00:13:27.900 =================================================================================================================== 00:13:27.900 Total : 5744.30 22.44 0.00 0.00 0.00 0.00 0.00 00:13:27.900 00:13:27.900 00:13:27.900 Latency(us) 00:13:27.900 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:27.900 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:27.900 Nvme0n1 : 10.01 5753.84 22.48 0.00 0.00 22239.35 16324.42 154426.65 00:13:27.900 =================================================================================================================== 00:13:27.900 Total : 5753.84 22.48 0.00 0.00 22239.35 16324.42 154426.65 00:13:27.900 0 00:13:27.900 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 68127 00:13:27.900 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 68127 ']' 00:13:27.900 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 68127 00:13:27.900 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:13:27.900 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:27.900 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68127 00:13:27.900 killing process with pid 68127 00:13:27.900 Received shutdown signal, test time was about 10.000000 seconds 00:13:27.900 00:13:27.900 Latency(us) 00:13:27.900 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:27.900 =================================================================================================================== 00:13:27.900 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:27.900 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:27.900 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:27.900 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68127' 00:13:27.900 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 68127 00:13:27.900 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 68127 
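The growth measured in the middle of that run comes down to a single extra RPC on top of the file resize and rescan already done during setup; pulled out of the trace, the step and its check look roughly like this, using the same lvstore UUID and cluster counts observed above:

    rpc.py bdev_lvol_grow_lvstore -u 0639d63f-3589-4694-bf6b-9dc961d863ad   # claim the newly visible clusters
    rpc.py bdev_lvol_get_lvstores -u 0639d63f-3589-4694-bf6b-9dc961d863ad \
        | jq -r '.[0].total_data_clusters'   # reported 49 before the grow, 99 afterwards

The lvol itself stays at 150M; only the store's capacity changes, and 99 total clusters minus the 38 allocated to the lvol leaves the 61 free clusters checked in the teardown that follows.
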
00:13:29.273 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:29.273 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:29.531 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0639d63f-3589-4694-bf6b-9dc961d863ad 00:13:29.532 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:29.789 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:29.790 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:13:29.790 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 67750 00:13:29.790 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 67750 00:13:30.082 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 67750 Killed "${NVMF_APP[@]}" "$@" 00:13:30.083 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:13:30.083 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:13:30.083 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:30.083 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:30.083 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:30.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:30.083 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=68291 00:13:30.083 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 68291 00:13:30.083 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:30.083 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 68291 ']' 00:13:30.083 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.083 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:30.083 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:30.083 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:30.083 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:30.083 [2024-07-25 08:55:37.049935] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:30.083 [2024-07-25 08:55:37.050115] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:30.341 [2024-07-25 08:55:37.237392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.598 [2024-07-25 08:55:37.477572] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:30.599 [2024-07-25 08:55:37.477641] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:30.599 [2024-07-25 08:55:37.477660] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:30.599 [2024-07-25 08:55:37.477677] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:30.599 [2024-07-25 08:55:37.477690] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:30.599 [2024-07-25 08:55:37.477740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.599 [2024-07-25 08:55:37.684959] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:30.856 08:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:30.856 08:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:13:30.856 08:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:30.856 08:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:30.856 08:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:31.115 08:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:31.115 08:55:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:31.373 [2024-07-25 08:55:38.268976] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:13:31.373 [2024-07-25 08:55:38.269295] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:13:31.373 [2024-07-25 08:55:38.269500] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:13:31.373 08:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:13:31.373 08:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 0892789f-81f3-4c4e-bcb9-46d13fb3392f 00:13:31.373 08:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=0892789f-81f3-4c4e-bcb9-46d13fb3392f 00:13:31.373 08:55:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:31.373 08:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:13:31.373 08:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:31.373 08:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:31.373 08:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:31.631 08:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0892789f-81f3-4c4e-bcb9-46d13fb3392f -t 2000 00:13:31.890 [ 00:13:31.890 { 00:13:31.890 "name": "0892789f-81f3-4c4e-bcb9-46d13fb3392f", 00:13:31.890 "aliases": [ 00:13:31.890 "lvs/lvol" 00:13:31.890 ], 00:13:31.890 "product_name": "Logical Volume", 00:13:31.890 "block_size": 4096, 00:13:31.890 "num_blocks": 38912, 00:13:31.890 "uuid": "0892789f-81f3-4c4e-bcb9-46d13fb3392f", 00:13:31.890 "assigned_rate_limits": { 00:13:31.890 "rw_ios_per_sec": 0, 00:13:31.890 "rw_mbytes_per_sec": 0, 00:13:31.890 "r_mbytes_per_sec": 0, 00:13:31.890 "w_mbytes_per_sec": 0 00:13:31.890 }, 00:13:31.890 "claimed": false, 00:13:31.890 "zoned": false, 00:13:31.890 "supported_io_types": { 00:13:31.890 "read": true, 00:13:31.890 "write": true, 00:13:31.890 "unmap": true, 00:13:31.890 "flush": false, 00:13:31.890 "reset": true, 00:13:31.890 "nvme_admin": false, 00:13:31.890 "nvme_io": false, 00:13:31.890 "nvme_io_md": false, 00:13:31.890 "write_zeroes": true, 00:13:31.890 "zcopy": false, 00:13:31.890 "get_zone_info": false, 00:13:31.890 "zone_management": false, 00:13:31.890 "zone_append": false, 00:13:31.890 "compare": false, 00:13:31.890 "compare_and_write": false, 00:13:31.890 "abort": false, 00:13:31.890 "seek_hole": true, 00:13:31.890 "seek_data": true, 00:13:31.890 "copy": false, 00:13:31.890 "nvme_iov_md": false 00:13:31.890 }, 00:13:31.890 "driver_specific": { 00:13:31.890 "lvol": { 00:13:31.890 "lvol_store_uuid": "0639d63f-3589-4694-bf6b-9dc961d863ad", 00:13:31.890 "base_bdev": "aio_bdev", 00:13:31.890 "thin_provision": false, 00:13:31.890 "num_allocated_clusters": 38, 00:13:31.890 "snapshot": false, 00:13:31.890 "clone": false, 00:13:31.890 "esnap_clone": false 00:13:31.890 } 00:13:31.890 } 00:13:31.890 } 00:13:31.890 ] 00:13:31.890 08:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:13:31.890 08:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0639d63f-3589-4694-bf6b-9dc961d863ad 00:13:31.890 08:55:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:13:32.176 08:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:13:32.176 08:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:13:32.176 08:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
0639d63f-3589-4694-bf6b-9dc961d863ad 00:13:32.456 08:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:13:32.456 08:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:32.714 [2024-07-25 08:55:39.598104] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:32.714 08:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0639d63f-3589-4694-bf6b-9dc961d863ad 00:13:32.714 08:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:13:32.714 08:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0639d63f-3589-4694-bf6b-9dc961d863ad 00:13:32.714 08:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:32.714 08:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:32.714 08:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:32.714 08:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:32.714 08:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:32.714 08:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:32.714 08:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:32.714 08:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:32.714 08:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0639d63f-3589-4694-bf6b-9dc961d863ad 00:13:32.972 request: 00:13:32.972 { 00:13:32.972 "uuid": "0639d63f-3589-4694-bf6b-9dc961d863ad", 00:13:32.972 "method": "bdev_lvol_get_lvstores", 00:13:32.972 "req_id": 1 00:13:32.972 } 00:13:32.972 Got JSON-RPC error response 00:13:32.972 response: 00:13:32.972 { 00:13:32.972 "code": -19, 00:13:32.972 "message": "No such device" 00:13:32.972 } 00:13:32.972 08:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:13:32.972 08:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:32.972 08:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:32.972 08:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:32.972 08:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:33.228 aio_bdev 00:13:33.228 08:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0892789f-81f3-4c4e-bcb9-46d13fb3392f 00:13:33.228 08:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=0892789f-81f3-4c4e-bcb9-46d13fb3392f 00:13:33.228 08:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:33.228 08:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:13:33.228 08:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:33.228 08:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:33.228 08:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:33.486 08:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0892789f-81f3-4c4e-bcb9-46d13fb3392f -t 2000 00:13:33.744 [ 00:13:33.744 { 00:13:33.744 "name": "0892789f-81f3-4c4e-bcb9-46d13fb3392f", 00:13:33.744 "aliases": [ 00:13:33.744 "lvs/lvol" 00:13:33.744 ], 00:13:33.744 "product_name": "Logical Volume", 00:13:33.744 "block_size": 4096, 00:13:33.744 "num_blocks": 38912, 00:13:33.744 "uuid": "0892789f-81f3-4c4e-bcb9-46d13fb3392f", 00:13:33.744 "assigned_rate_limits": { 00:13:33.744 "rw_ios_per_sec": 0, 00:13:33.744 "rw_mbytes_per_sec": 0, 00:13:33.744 "r_mbytes_per_sec": 0, 00:13:33.744 "w_mbytes_per_sec": 0 00:13:33.744 }, 00:13:33.744 "claimed": false, 00:13:33.744 "zoned": false, 00:13:33.744 "supported_io_types": { 00:13:33.744 "read": true, 00:13:33.744 "write": true, 00:13:33.744 "unmap": true, 00:13:33.744 "flush": false, 00:13:33.744 "reset": true, 00:13:33.744 "nvme_admin": false, 00:13:33.744 "nvme_io": false, 00:13:33.744 "nvme_io_md": false, 00:13:33.744 "write_zeroes": true, 00:13:33.744 "zcopy": false, 00:13:33.744 "get_zone_info": false, 00:13:33.744 "zone_management": false, 00:13:33.744 "zone_append": false, 00:13:33.744 "compare": false, 00:13:33.744 "compare_and_write": false, 00:13:33.744 "abort": false, 00:13:33.744 "seek_hole": true, 00:13:33.744 "seek_data": true, 00:13:33.744 "copy": false, 00:13:33.745 "nvme_iov_md": false 00:13:33.745 }, 00:13:33.745 "driver_specific": { 00:13:33.745 "lvol": { 00:13:33.745 "lvol_store_uuid": "0639d63f-3589-4694-bf6b-9dc961d863ad", 00:13:33.745 "base_bdev": "aio_bdev", 00:13:33.745 "thin_provision": false, 00:13:33.745 "num_allocated_clusters": 38, 00:13:33.745 "snapshot": false, 00:13:33.745 "clone": false, 00:13:33.745 "esnap_clone": false 00:13:33.745 } 00:13:33.745 } 00:13:33.745 } 00:13:33.745 ] 00:13:33.745 08:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:13:33.745 08:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0639d63f-3589-4694-bf6b-9dc961d863ad 00:13:33.745 08:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r 
'.[0].free_clusters' 00:13:34.002 08:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:34.002 08:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0639d63f-3589-4694-bf6b-9dc961d863ad 00:13:34.002 08:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:34.260 08:55:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:34.260 08:55:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 0892789f-81f3-4c4e-bcb9-46d13fb3392f 00:13:34.518 08:55:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0639d63f-3589-4694-bf6b-9dc961d863ad 00:13:34.776 08:55:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:35.036 08:55:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:35.293 ************************************ 00:13:35.293 END TEST lvs_grow_dirty 00:13:35.293 ************************************ 00:13:35.293 00:13:35.293 real 0m21.713s 00:13:35.293 user 0m48.296s 00:13:35.293 sys 0m7.559s 00:13:35.293 08:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:35.293 08:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:35.293 08:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:13:35.293 08:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:13:35.293 08:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:13:35.293 08:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:13:35.293 08:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:35.293 08:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:13:35.293 08:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:13:35.293 08:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:13:35.294 08:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:35.294 nvmf_trace.0 00:13:35.294 08:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:13:35.294 08:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:13:35.294 08:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:35.294 08:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:13:35.858 08:55:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:35.858 08:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:13:35.858 08:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:35.858 08:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:35.858 rmmod nvme_tcp 00:13:35.858 rmmod nvme_fabrics 00:13:35.858 rmmod nvme_keyring 00:13:35.858 08:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:35.858 08:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:13:35.858 08:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:13:35.858 08:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 68291 ']' 00:13:35.858 08:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 68291 00:13:35.858 08:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 68291 ']' 00:13:35.858 08:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 68291 00:13:35.858 08:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:13:35.859 08:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:35.859 08:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68291 00:13:35.859 killing process with pid 68291 00:13:35.859 08:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:35.859 08:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:35.859 08:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68291' 00:13:35.859 08:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 68291 00:13:35.859 08:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 68291 00:13:37.235 08:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:37.235 08:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:37.235 08:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:37.235 08:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:37.235 08:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:37.235 08:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:37.235 08:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:37.235 08:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:37.235 08:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:37.235 00:13:37.235 real 0m45.145s 00:13:37.235 user 1m14.449s 00:13:37.235 sys 0m11.242s 00:13:37.235 08:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:37.235 08:55:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:37.235 ************************************ 00:13:37.235 END TEST nvmf_lvs_grow 00:13:37.235 ************************************ 00:13:37.235 08:55:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:37.235 08:55:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:37.235 08:55:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:37.235 08:55:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:37.235 ************************************ 00:13:37.235 START TEST nvmf_bdev_io_wait 00:13:37.235 ************************************ 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:37.236 * Looking for test storage... 00:13:37.236 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:37.236 08:55:44 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 
00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:37.236 
08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:37.236 Cannot find device "nvmf_tgt_br" 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:37.236 Cannot find device "nvmf_tgt_br2" 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:37.236 Cannot find device "nvmf_tgt_br" 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:13:37.236 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:37.236 Cannot find device "nvmf_tgt_br2" 00:13:37.237 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:13:37.237 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:37.237 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:37.237 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:37.237 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:37.237 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:13:37.237 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:37.237 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:37.237 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:13:37.237 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:37.237 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:37.237 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:37.237 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:37.237 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:37.237 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:37.237 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:37.237 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:37.237 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:37.237 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:37.237 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:37.496 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:37.496 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:37.496 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:37.496 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:37.496 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:37.496 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:37.496 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:37.496 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:37.496 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:37.496 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:37.496 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:37.496 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:37.496 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:37.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:37.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:13:37.496 00:13:37.496 --- 10.0.0.2 ping statistics --- 00:13:37.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.496 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:13:37.496 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:37.496 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:13:37.496 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:13:37.496 00:13:37.496 --- 10.0.0.3 ping statistics --- 00:13:37.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.496 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:13:37.496 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:37.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:37.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:13:37.496 00:13:37.496 --- 10.0.0.1 ping statistics --- 00:13:37.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.496 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:13:37.496 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:37.496 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:13:37.496 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:37.496 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:37.496 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:37.496 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:37.496 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:37.496 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:37.496 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:37.496 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:13:37.496 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:37.496 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:37.496 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:37.496 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=68625 00:13:37.496 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:13:37.496 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 68625 00:13:37.496 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 68625 ']' 00:13:37.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.496 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.496 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:37.496 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:37.496 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:37.496 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:37.496 [2024-07-25 08:55:44.599965] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:37.496 [2024-07-25 08:55:44.600141] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:37.754 [2024-07-25 08:55:44.775851] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:38.013 [2024-07-25 08:55:45.067831] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:38.013 [2024-07-25 08:55:45.067932] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:38.013 [2024-07-25 08:55:45.067950] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:38.013 [2024-07-25 08:55:45.067966] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:38.013 [2024-07-25 08:55:45.067981] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:38.013 [2024-07-25 08:55:45.068201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:38.013 [2024-07-25 08:55:45.068506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:38.013 [2024-07-25 08:55:45.069934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.013 [2024-07-25 08:55:45.069939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:38.578 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:38.578 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:13:38.578 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:38.578 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:38.578 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:38.578 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:38.579 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:13:38.579 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.579 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:38.579 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.579 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:13:38.579 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.579 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:38.838 [2024-07-25 08:55:45.753985] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion 
override: uring 00:13:38.838 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.838 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:38.838 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.838 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:38.838 [2024-07-25 08:55:45.777017] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:38.838 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.838 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:38.838 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.838 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:38.838 Malloc0 00:13:38.838 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.838 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:38.838 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.838 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:38.838 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.838 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:38.838 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.838 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:38.838 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.838 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:38.838 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.838 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:38.838 [2024-07-25 08:55:45.906162] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:38.838 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.838 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=68660 00:13:38.838 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:13:38.838 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:13:38.838 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:38.838 08:55:45 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=68662 00:13:38.838 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:38.838 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:38.838 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:38.838 { 00:13:38.838 "params": { 00:13:38.838 "name": "Nvme$subsystem", 00:13:38.838 "trtype": "$TEST_TRANSPORT", 00:13:38.838 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:38.838 "adrfam": "ipv4", 00:13:38.838 "trsvcid": "$NVMF_PORT", 00:13:38.838 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:38.838 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:38.838 "hdgst": ${hdgst:-false}, 00:13:38.838 "ddgst": ${ddgst:-false} 00:13:38.838 }, 00:13:38.838 "method": "bdev_nvme_attach_controller" 00:13:38.838 } 00:13:38.838 EOF 00:13:38.838 )") 00:13:38.838 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:13:38.838 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:13:38.838 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:38.838 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=68664 00:13:38.838 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:38.838 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:38.838 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:38.838 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:38.838 { 00:13:38.838 "params": { 00:13:38.838 "name": "Nvme$subsystem", 00:13:38.838 "trtype": "$TEST_TRANSPORT", 00:13:38.838 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:38.838 "adrfam": "ipv4", 00:13:38.838 "trsvcid": "$NVMF_PORT", 00:13:38.838 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:38.838 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:38.838 "hdgst": ${hdgst:-false}, 00:13:38.838 "ddgst": ${ddgst:-false} 00:13:38.838 }, 00:13:38.838 "method": "bdev_nvme_attach_controller" 00:13:38.838 } 00:13:38.838 EOF 00:13:38.838 )") 00:13:38.838 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:13:38.838 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=68667 00:13:38.838 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:13:38.838 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:38.838 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:13:38.838 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:38.838 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 
00:13:38.838 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:38.838 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:38.838 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:38.838 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:38.838 { 00:13:38.838 "params": { 00:13:38.838 "name": "Nvme$subsystem", 00:13:38.838 "trtype": "$TEST_TRANSPORT", 00:13:38.838 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:38.838 "adrfam": "ipv4", 00:13:38.838 "trsvcid": "$NVMF_PORT", 00:13:38.839 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:38.839 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:38.839 "hdgst": ${hdgst:-false}, 00:13:38.839 "ddgst": ${ddgst:-false} 00:13:38.839 }, 00:13:38.839 "method": "bdev_nvme_attach_controller" 00:13:38.839 } 00:13:38.839 EOF 00:13:38.839 )") 00:13:38.839 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:38.839 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:38.839 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:38.839 "params": { 00:13:38.839 "name": "Nvme1", 00:13:38.839 "trtype": "tcp", 00:13:38.839 "traddr": "10.0.0.2", 00:13:38.839 "adrfam": "ipv4", 00:13:38.839 "trsvcid": "4420", 00:13:38.839 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:38.839 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:38.839 "hdgst": false, 00:13:38.839 "ddgst": false 00:13:38.839 }, 00:13:38.839 "method": "bdev_nvme_attach_controller" 00:13:38.839 }' 00:13:38.839 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:13:38.839 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:38.839 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:13:38.839 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:38.839 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:38.839 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:38.839 { 00:13:38.839 "params": { 00:13:38.839 "name": "Nvme$subsystem", 00:13:38.839 "trtype": "$TEST_TRANSPORT", 00:13:38.839 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:38.839 "adrfam": "ipv4", 00:13:38.839 "trsvcid": "$NVMF_PORT", 00:13:38.839 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:38.839 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:38.839 "hdgst": ${hdgst:-false}, 00:13:38.839 "ddgst": ${ddgst:-false} 00:13:38.839 }, 00:13:38.839 "method": "bdev_nvme_attach_controller" 00:13:38.839 } 00:13:38.839 EOF 00:13:38.839 )") 00:13:38.839 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:38.839 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:38.839 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:38.839 "params": { 00:13:38.839 "name": "Nvme1", 00:13:38.839 "trtype": "tcp", 00:13:38.839 "traddr": "10.0.0.2", 00:13:38.839 "adrfam": "ipv4", 00:13:38.839 "trsvcid": "4420", 00:13:38.839 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:38.839 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:38.839 "hdgst": false, 00:13:38.839 "ddgst": false 00:13:38.839 }, 00:13:38.839 "method": "bdev_nvme_attach_controller" 00:13:38.839 }' 00:13:38.839 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:38.839 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:38.839 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:38.839 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:38.839 "params": { 00:13:38.839 "name": "Nvme1", 00:13:38.839 "trtype": "tcp", 00:13:38.839 "traddr": "10.0.0.2", 00:13:38.839 "adrfam": "ipv4", 00:13:38.839 "trsvcid": "4420", 00:13:38.839 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:38.839 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:38.839 "hdgst": false, 00:13:38.839 "ddgst": false 00:13:38.839 }, 00:13:38.839 "method": "bdev_nvme_attach_controller" 00:13:38.839 }' 00:13:39.098 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:39.098 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:39.098 "params": { 00:13:39.098 "name": "Nvme1", 00:13:39.098 "trtype": "tcp", 00:13:39.098 "traddr": "10.0.0.2", 00:13:39.098 "adrfam": "ipv4", 00:13:39.098 "trsvcid": "4420", 00:13:39.098 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:39.098 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:39.098 "hdgst": false, 00:13:39.098 "ddgst": false 00:13:39.098 }, 00:13:39.098 "method": "bdev_nvme_attach_controller" 00:13:39.098 }' 00:13:39.098 08:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 68660 00:13:39.098 [2024-07-25 08:55:46.025336] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:13:39.098 [2024-07-25 08:55:46.025510] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:13:39.098 [2024-07-25 08:55:46.039281] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:39.099 [2024-07-25 08:55:46.039466] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:13:39.099 [2024-07-25 08:55:46.062317] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:39.099 [2024-07-25 08:55:46.062452] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:13:39.099 [2024-07-25 08:55:46.070926] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:39.099 [2024-07-25 08:55:46.071110] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:13:39.357 [2024-07-25 08:55:46.262805] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.357 [2024-07-25 08:55:46.340120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.357 [2024-07-25 08:55:46.414415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.615 [2024-07-25 08:55:46.494669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.615 [2024-07-25 08:55:46.553987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:39.615 [2024-07-25 08:55:46.592238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:13:39.615 [2024-07-25 08:55:46.624020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:13:39.873 [2024-07-25 08:55:46.745031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:13:39.873 [2024-07-25 08:55:46.798633] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:39.873 [2024-07-25 08:55:46.826182] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:39.873 [2024-07-25 08:55:46.859039] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:39.873 [2024-07-25 08:55:46.940490] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:39.873 Running I/O for 1 seconds... 00:13:40.130 Running I/O for 1 seconds... 00:13:40.130 Running I/O for 1 seconds... 00:13:40.130 Running I/O for 1 seconds... 
00:13:41.064 00:13:41.064 Latency(us) 00:13:41.064 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:41.064 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:13:41.064 Nvme1n1 : 1.02 4812.55 18.80 0.00 0.00 26381.49 3589.59 42181.35 00:13:41.064 =================================================================================================================== 00:13:41.064 Total : 4812.55 18.80 0.00 0.00 26381.49 3589.59 42181.35 00:13:41.064 00:13:41.064 Latency(us) 00:13:41.064 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:41.064 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:13:41.064 Nvme1n1 : 1.01 6716.34 26.24 0.00 0.00 18924.25 7626.01 27286.81 00:13:41.064 =================================================================================================================== 00:13:41.064 Total : 6716.34 26.24 0.00 0.00 18924.25 7626.01 27286.81 00:13:41.064 00:13:41.064 Latency(us) 00:13:41.064 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:41.064 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:13:41.064 Nvme1n1 : 1.01 4792.51 18.72 0.00 0.00 26606.86 6404.65 51713.86 00:13:41.064 =================================================================================================================== 00:13:41.064 Total : 4792.51 18.72 0.00 0.00 26606.86 6404.65 51713.86 00:13:41.064 00:13:41.064 Latency(us) 00:13:41.064 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:41.064 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:13:41.064 Nvme1n1 : 1.00 140859.65 550.23 0.00 0.00 905.64 450.56 1102.20 00:13:41.064 =================================================================================================================== 00:13:41.064 Total : 140859.65 550.23 0.00 0.00 905.64 450.56 1102.20 00:13:42.441 08:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 68662 00:13:42.441 08:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 68664 00:13:42.441 08:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 68667 00:13:42.441 08:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:42.441 08:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.441 08:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:42.441 08:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.441 08:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:13:42.441 08:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:13:42.441 08:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:42.441 08:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:13:42.441 08:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:42.441 08:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:13:42.441 08:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 
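With the flush numbers in, the I/O phase is done and the script tears the fixture down through nvmftestfini (rpc_cmd in the trace is the harness wrapper around scripts/rpc.py). A condensed sketch of the teardown the following trace performs; the pid, NQN and interface names are the ones from this run, and the real helpers add retries, traps and error handling:

# Condensed teardown mirroring the trace below.
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the test subsystem first
trap - SIGINT SIGTERM EXIT                                 # clear the cleanup trap
sync
modprobe -v -r nvme-tcp                                    # unloading nvme-tcp also drops nvme-fabrics/nvme-keyring
modprobe -v -r nvme-fabrics
kill 68625 && wait 68625                                   # stop the nvmf_tgt started for this test
ip netns delete nvmf_tgt_ns_spdk                           # roughly what remove_spdk_ns amounts to here (assumption)
ip -4 addr flush nvmf_init_if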
00:13:42.441 08:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:42.441 rmmod nvme_tcp 00:13:42.441 rmmod nvme_fabrics 00:13:42.441 rmmod nvme_keyring 00:13:42.441 08:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:42.441 08:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:13:42.441 08:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:13:42.441 08:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 68625 ']' 00:13:42.441 08:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 68625 00:13:42.441 08:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 68625 ']' 00:13:42.441 08:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 68625 00:13:42.441 08:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:13:42.441 08:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:42.441 08:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68625 00:13:42.441 08:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:42.441 08:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:42.441 killing process with pid 68625 00:13:42.441 08:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68625' 00:13:42.441 08:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 68625 00:13:42.441 08:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 68625 00:13:43.376 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:43.376 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:43.376 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:43.376 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:43.376 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:43.376 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.376 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:43.376 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.635 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:43.635 00:13:43.635 real 0m6.466s 00:13:43.635 user 0m29.955s 00:13:43.635 sys 0m2.611s 00:13:43.635 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:43.635 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:43.635 ************************************ 00:13:43.635 END TEST nvmf_bdev_io_wait 
00:13:43.635 ************************************ 00:13:43.635 08:55:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:43.635 08:55:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:43.636 ************************************ 00:13:43.636 START TEST nvmf_queue_depth 00:13:43.636 ************************************ 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:43.636 * Looking for test storage... 00:13:43.636 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 
-- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # 
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:43.636 Cannot find device "nvmf_tgt_br" 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:43.636 Cannot find device "nvmf_tgt_br2" 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:43.636 Cannot find device "nvmf_tgt_br" 00:13:43.636 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:13:43.637 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:43.637 Cannot find device "nvmf_tgt_br2" 00:13:43.637 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:13:43.637 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:43.896 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:43.896 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:43.896 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:43.896 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:13:43.896 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:43.896 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:43.896 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:13:43.896 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:43.896 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:43.896 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:43.896 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:43.896 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:43.896 08:55:50 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:43.896 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:43.896 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:43.896 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:43.896 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:43.896 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:43.896 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:43.896 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:43.896 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:43.896 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:43.896 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:43.896 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:43.896 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:43.896 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:43.896 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:43.896 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:43.896 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:43.896 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:43.896 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:43.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:43.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:13:43.896 00:13:43.896 --- 10.0.0.2 ping statistics --- 00:13:43.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.896 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:13:43.896 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:43.896 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:43.896 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:13:43.896 00:13:43.896 --- 10.0.0.3 ping statistics --- 00:13:43.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.896 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:13:43.896 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:43.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:43.896 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:13:43.896 00:13:43.896 --- 10.0.0.1 ping statistics --- 00:13:43.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.896 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:13:43.896 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:43.896 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:13:43.896 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:43.896 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:43.896 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:43.896 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:43.896 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:43.896 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:43.896 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:44.159 08:55:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:13:44.159 08:55:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:44.159 08:55:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:44.159 08:55:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:44.159 08:55:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=68925 00:13:44.159 08:55:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 68925 00:13:44.159 08:55:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:44.159 08:55:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 68925 ']' 00:13:44.159 08:55:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.159 08:55:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:44.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.159 08:55:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.159 08:55:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:44.159 08:55:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:44.159 [2024-07-25 08:55:51.139081] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
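For orientation, the nvmf_veth_init block above reduces to a small bridged topology: the initiator address 10.0.0.1 stays in the root namespace on nvmf_init_if, the two target addresses 10.0.0.2 and 10.0.0.3 sit on veth ends moved into nvmf_tgt_ns_spdk, the peer ends are enslaved to the nvmf_br bridge, and TCP port 4420 is opened in iptables; the target app is then run inside the namespace so it binds the 10.0.0.2 side. A condensed sketch with commands taken from this trace (remaining link-up steps and error checks trimmed, paths shortened):

# Initiator in the root ns, target interfaces in nvmf_tgt_ns_spdk, bridged via nvmf_br.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge && ip link set nvmf_br up
for l in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br && ip link set "$l" up; done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &   # the nvmfappstart -m 0x2 above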
00:13:44.159 [2024-07-25 08:55:51.139269] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:44.454 [2024-07-25 08:55:51.320522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.712 [2024-07-25 08:55:51.606508] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:44.712 [2024-07-25 08:55:51.606597] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:44.712 [2024-07-25 08:55:51.606615] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:44.712 [2024-07-25 08:55:51.606630] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:44.712 [2024-07-25 08:55:51.606643] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:44.712 [2024-07-25 08:55:51.606689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:44.712 [2024-07-25 08:55:51.812002] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:44.971 08:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:44.971 08:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:13:44.971 08:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:44.971 08:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:44.971 08:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:45.230 08:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:45.230 08:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:45.230 08:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.230 08:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:45.230 [2024-07-25 08:55:52.125390] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:45.230 08:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.230 08:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:45.230 08:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.230 08:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:45.231 Malloc0 00:13:45.231 08:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.231 08:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:45.231 08:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.231 08:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:13:45.231 08:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.231 08:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:45.231 08:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.231 08:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:45.231 08:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.231 08:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:45.231 08:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.231 08:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:45.231 [2024-07-25 08:55:52.238803] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:45.231 08:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.231 08:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=68957 00:13:45.231 08:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:13:45.231 08:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:45.231 08:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 68957 /var/tmp/bdevperf.sock 00:13:45.231 08:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 68957 ']' 00:13:45.231 08:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:45.231 08:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:45.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:45.231 08:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:45.231 08:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:45.231 08:55:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:45.231 [2024-07-25 08:55:52.334858] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
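Putting the queue-depth setup just traced in one place: the target is provisioned over its default RPC socket, bdevperf is started in -z (wait-for-RPC) mode with a 1024-deep verify workload, and, as the trace below shows, the remote namespace is then attached as NVMe0 and the run kicked off through bdevperf.py. A condensed sketch with the arguments from this trace (rpc_cmd in the trace wraps scripts/rpc.py; paths shortened):

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                       # transport options as used in the trace
$rpc bdev_malloc_create 64 512 -b Malloc0                          # 64 MiB RAM bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests   # the 10-second run reported below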
00:13:45.231 [2024-07-25 08:55:52.335044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68957 ] 00:13:45.490 [2024-07-25 08:55:52.498551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.749 [2024-07-25 08:55:52.734679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.008 [2024-07-25 08:55:52.938603] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:46.268 08:55:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:46.268 08:55:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:13:46.268 08:55:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:13:46.268 08:55:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.268 08:55:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:46.268 NVMe0n1 00:13:46.268 08:55:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.268 08:55:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:46.526 Running I/O for 10 seconds... 00:13:56.498 00:13:56.498 Latency(us) 00:13:56.498 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:56.498 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:13:56.498 Verification LBA range: start 0x0 length 0x4000 00:13:56.498 NVMe0n1 : 10.11 5926.12 23.15 0.00 0.00 171669.08 27644.28 114390.11 00:13:56.498 =================================================================================================================== 00:13:56.498 Total : 5926.12 23.15 0.00 0.00 171669.08 27644.28 114390.11 00:13:56.498 0 00:13:56.498 08:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 68957 00:13:56.498 08:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 68957 ']' 00:13:56.498 08:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 68957 00:13:56.498 08:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:13:56.498 08:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:56.498 08:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68957 00:13:56.756 08:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:56.756 killing process with pid 68957 00:13:56.756 08:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:56.756 08:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68957' 00:13:56.756 Received shutdown signal, test time was about 10.000000 seconds 00:13:56.756 
00:13:56.756 Latency(us) 00:13:56.756 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:56.756 =================================================================================================================== 00:13:56.756 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:56.756 08:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 68957 00:13:56.756 08:56:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 68957 00:13:58.151 08:56:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:58.151 08:56:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:13:58.151 08:56:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:58.151 08:56:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:13:58.151 08:56:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:58.151 08:56:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:13:58.151 08:56:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:58.151 08:56:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:58.151 rmmod nvme_tcp 00:13:58.151 rmmod nvme_fabrics 00:13:58.151 rmmod nvme_keyring 00:13:58.151 08:56:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:58.151 08:56:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:13:58.151 08:56:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:13:58.151 08:56:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 68925 ']' 00:13:58.151 08:56:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 68925 00:13:58.151 08:56:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 68925 ']' 00:13:58.151 08:56:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 68925 00:13:58.151 08:56:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:13:58.151 08:56:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:58.151 08:56:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68925 00:13:58.151 08:56:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:58.151 08:56:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:58.151 killing process with pid 68925 00:13:58.151 08:56:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68925' 00:13:58.151 08:56:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 68925 00:13:58.151 08:56:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 68925 00:13:59.527 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:59.527 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:59.527 08:56:06 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:59.527 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:59.527 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:59.527 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.527 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:59.527 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.527 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:59.527 00:13:59.527 real 0m15.855s 00:13:59.527 user 0m26.673s 00:13:59.527 sys 0m2.348s 00:13:59.527 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:59.527 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:59.527 ************************************ 00:13:59.527 END TEST nvmf_queue_depth 00:13:59.527 ************************************ 00:13:59.527 08:56:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:59.527 08:56:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:59.527 08:56:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:59.527 08:56:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:59.527 ************************************ 00:13:59.527 START TEST nvmf_target_multipath 00:13:59.527 ************************************ 00:13:59.527 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:59.527 * Looking for test storage... 
00:13:59.527 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:59.527 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:59.527 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:13:59.527 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:59.527 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:59.527 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:59.527 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:59.527 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:59.527 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:59.527 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:59.527 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:59.527 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:59.527 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:59.527 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:13:59.527 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:13:59.527 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:59.527 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:59.527 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:59.527 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:59.527 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:59.527 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:59.527 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:59.527 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:59.527 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:59.528 08:56:06 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:59.528 Cannot find device "nvmf_tgt_br" 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:59.528 Cannot find device "nvmf_tgt_br2" 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:59.528 Cannot find device "nvmf_tgt_br" 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:59.528 Cannot find device "nvmf_tgt_br2" 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:13:59.528 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:59.787 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:59.787 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:59.787 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:59.787 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:13:59.787 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:59.787 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:59.787 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:13:59.787 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:59.787 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:59.787 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:59.787 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name 
nvmf_tgt_br2 00:13:59.787 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:59.787 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:59.787 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:59.787 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:59.787 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:59.787 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:59.787 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:59.787 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:59.787 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:59.787 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:59.787 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:59.787 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:59.787 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:59.787 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:59.787 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:59.787 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:59.787 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:59.787 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:59.787 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:59.787 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:59.787 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:59.787 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:13:59.787 00:13:59.787 --- 10.0.0.2 ping statistics --- 00:13:59.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.787 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:13:59.787 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:59.787 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:13:59.787 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:13:59.787 00:13:59.787 --- 10.0.0.3 ping statistics --- 00:13:59.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.787 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:13:59.787 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:59.787 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:59.787 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:13:59.787 00:13:59.787 --- 10.0.0.1 ping statistics --- 00:13:59.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.787 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:13:59.787 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:59.787 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:13:59.787 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:59.787 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:59.787 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:59.787 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:59.787 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:59.787 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:59.787 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:00.046 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:14:00.046 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:14:00.046 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:14:00.046 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:00.046 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:00.046 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:00.046 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=69313 00:14:00.046 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 69313 00:14:00.046 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@831 -- # '[' -z 69313 ']' 00:14:00.046 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
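For reference, the veth/bridge topology that nvmf_veth_init assembled in the trace above (one initiator interface on the host, two target interfaces inside the nvmf_tgt_ns_spdk namespace, all joined by the nvmf_br bridge) can be reproduced by hand with plain iproute2/iptables calls. A condensed sketch of the same commands, run as root, using the interface names and addresses from the trace:

  # recreate the test topology shown above
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side (10.0.0.1)
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # first target path (10.0.0.2)
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # second target path (10.0.0.3)
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3    # host reaches both target addresses through the bridge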
00:14:00.046 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:00.046 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:00.046 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:00.046 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:00.046 08:56:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:00.046 [2024-07-25 08:56:07.039537] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:00.046 [2024-07-25 08:56:07.039711] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:00.304 [2024-07-25 08:56:07.220404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:00.563 [2024-07-25 08:56:07.497556] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:00.563 [2024-07-25 08:56:07.497621] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:00.563 [2024-07-25 08:56:07.497639] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:00.563 [2024-07-25 08:56:07.497655] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:00.563 [2024-07-25 08:56:07.497671] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
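The tracepoint NOTICE lines above also describe how to inspect the running target. A minimal sketch, assuming the spdk_trace binary lives under this repo's build/bin directory (the path is an assumption; the command itself is quoted from the NOTICE):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0   # decode the live trace buffer, per the NOTICE above
  cp /dev/shm/nvmf_trace.0 /tmp/                                   # or keep the raw file for offline analysis/debug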
00:14:00.563 [2024-07-25 08:56:07.497862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:00.563 [2024-07-25 08:56:07.497960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:00.563 [2024-07-25 08:56:07.498730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.563 [2024-07-25 08:56:07.498745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:00.822 [2024-07-25 08:56:07.704587] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:01.080 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:01.080 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # return 0 00:14:01.080 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:01.080 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:01.080 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:01.080 08:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:01.080 08:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:01.338 [2024-07-25 08:56:08.257157] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:01.338 08:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:14:01.597 Malloc0 00:14:01.597 08:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:14:01.855 08:56:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:02.114 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:02.379 [2024-07-25 08:56:09.364625] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:02.379 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:02.637 [2024-07-25 08:56:09.588788] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:02.637 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid=a4705431-95c9-4bc1-9185-4a8233d2d7f5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:14:02.637 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid=a4705431-95c9-4bc1-9185-4a8233d2d7f5 -t tcp -n nqn.2016-06.io.spdk:cnode1 
-a 10.0.0.3 -s 4420 -g -G 00:14:02.895 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:14:02.895 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:14:02.895 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:02.895 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:02.895 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:14:04.799 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:04.799 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:04.799 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:04.799 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:04.799 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:04.799 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:14:04.799 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:14:04.799 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:14:04.799 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:14:04.799 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:14:04.799 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:14:04.799 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:14:04.799 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:14:04.799 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:14:04.799 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:14:04.799 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:14:04.799 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:14:04.799 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:14:04.799 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:14:04.799 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:14:04.799 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 
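Stripped of the xtrace noise, the multipath provisioning that just ran reduces to a short RPC sequence plus one nvme connect per listener. A condensed sketch with the same values as the trace ($HOSTNQN/$HOSTID stand in for the generated UUIDs above):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_create_transport -t tcp -o -u 8192                    # transport flags exactly as passed at multipath.sh@59
  $rpc bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB malloc bdev, 512 B blocks
  $rpc nvmf_create_subsystem $nqn -a -s SPDKISFASTANDAWESOME -r   # -r enables ANA reporting
  $rpc nvmf_subsystem_add_ns $nqn Malloc0
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.3 -s 4420
  # one connect per listener -> two controllers (nvme0c0n1, nvme0c1n1) under a single subsystem
  nvme connect -t tcp -n $nqn -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN" --hostid="$HOSTID" -g -G
  nvme connect -t tcp -n $nqn -a 10.0.0.3 -s 4420 --hostnqn="$HOSTNQN" --hostid="$HOSTID" -g -G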
00:14:04.799 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:04.799 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:04.799 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:14:04.799 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:14:04.799 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:14:04.799 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:14:04.799 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:04.799 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:04.799 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:14:04.799 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:14:04.799 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:14:04.799 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=69403 00:14:04.799 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:14:04.799 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:14:05.057 [global] 00:14:05.057 thread=1 00:14:05.057 invalidate=1 00:14:05.057 rw=randrw 00:14:05.057 time_based=1 00:14:05.057 runtime=6 00:14:05.057 ioengine=libaio 00:14:05.057 direct=1 00:14:05.057 bs=4096 00:14:05.057 iodepth=128 00:14:05.057 norandommap=0 00:14:05.057 numjobs=1 00:14:05.057 00:14:05.057 verify_dump=1 00:14:05.057 verify_backlog=512 00:14:05.057 verify_state_save=0 00:14:05.057 do_verify=1 00:14:05.057 verify=crc32c-intel 00:14:05.057 [job0] 00:14:05.057 filename=/dev/nvme0n1 00:14:05.057 Could not set queue depth (nvme0n1) 00:14:05.057 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:05.057 fio-3.35 00:14:05.057 Starting 1 thread 00:14:05.992 08:56:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:14:06.250 08:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:14:06.507 08:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:14:06.507 08:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:14:06.507 08:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 
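The failover that fio is exercising here is driven entirely by the two listener-state RPCs above plus the sysfs poll done by check_ana_state. The equivalent logic as a stand-alone sketch (the loop body is an approximation of the helper, not the script verbatim):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  # flip the paths: 10.0.0.2 becomes inaccessible, 10.0.0.3 stays reachable but non-optimized
  # note: the RPC spells the state non_optimized, while the kernel sysfs file reports non-optimized
  $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
  $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.3 -s 4420 -n non_optimized

  # wait (up to ~20s, as in the trace) until the kernel reports the expected ANA state for a path
  check_ana_state() {
      local path=$1 ana_state=$2 timeout=20
      local ana_state_f=/sys/block/$path/ana_state
      while [[ ! -e $ana_state_f || $(<"$ana_state_f") != "$ana_state" ]]; do
          (( timeout-- > 0 )) || return 1
          sleep 1
      done
  }
  check_ana_state nvme0c0n1 inaccessible
  check_ana_state nvme0c1n1 non-optimized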
00:14:06.507 08:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:06.507 08:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:14:06.507 08:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:14:06.507 08:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:14:06.507 08:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:14:06.507 08:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:06.507 08:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:06.507 08:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:14:06.507 08:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:14:06.507 08:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:14:06.765 08:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:14:07.023 08:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:14:07.023 08:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:14:07.023 08:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:07.023 08:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:07.023 08:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:14:07.023 08:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:14:07.023 08:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:14:07.023 08:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:14:07.023 08:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:07.023 08:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:07.024 08:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:14:07.024 08:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:14:07.024 08:56:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 69403 00:14:11.208 00:14:11.208 job0: (groupid=0, jobs=1): err= 0: pid=69424: Thu Jul 25 08:56:18 2024 00:14:11.208 read: IOPS=8152, BW=31.8MiB/s (33.4MB/s)(191MiB/6003msec) 00:14:11.208 slat (usec): min=6, max=9955, avg=72.49, stdev=286.89 00:14:11.208 clat (usec): min=2219, max=19882, avg=10566.73, stdev=1722.55 00:14:11.208 lat (usec): min=2230, max=19920, avg=10639.22, stdev=1725.27 00:14:11.208 clat percentiles (usec): 00:14:11.208 | 1.00th=[ 5800], 5.00th=[ 8356], 10.00th=[ 9110], 20.00th=[ 9765], 00:14:11.208 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10290], 60.00th=[10552], 00:14:11.208 | 70.00th=[10814], 80.00th=[11207], 90.00th=[11994], 95.00th=[14615], 00:14:11.208 | 99.00th=[16450], 99.50th=[16909], 99.90th=[17433], 99.95th=[17695], 00:14:11.208 | 99.99th=[18744] 00:14:11.209 bw ( KiB/s): min= 5672, max=22136, per=55.92%, avg=18236.27, stdev=5233.96, samples=11 00:14:11.209 iops : min= 1418, max= 5534, avg=4559.00, stdev=1308.55, samples=11 00:14:11.209 write: IOPS=5069, BW=19.8MiB/s (20.8MB/s)(103MiB/5203msec); 0 zone resets 00:14:11.209 slat (usec): min=14, max=2519, avg=82.04, stdev=218.01 00:14:11.209 clat (usec): min=1249, max=18494, avg=9269.57, stdev=1534.16 00:14:11.209 lat (usec): min=1278, max=18517, avg=9351.61, stdev=1538.64 00:14:11.209 clat percentiles (usec): 00:14:11.209 | 1.00th=[ 4359], 5.00th=[ 5735], 10.00th=[ 7701], 20.00th=[ 8717], 00:14:11.209 | 30.00th=[ 8979], 40.00th=[ 9241], 50.00th=[ 9503], 60.00th=[ 9634], 00:14:11.209 | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[10552], 95.00th=[10945], 00:14:11.209 | 99.00th=[14091], 99.50th=[14877], 99.90th=[16188], 99.95th=[16450], 00:14:11.209 | 99.99th=[18482] 00:14:11.209 bw ( KiB/s): min= 5912, max=21832, per=89.91%, avg=18234.36, stdev=5044.86, samples=11 00:14:11.209 iops : min= 1478, max= 5458, avg=4558.55, stdev=1261.24, samples=11 00:14:11.209 lat (msec) : 2=0.01%, 4=0.23%, 10=44.18%, 20=55.59% 00:14:11.209 cpu : usr=4.77%, sys=18.26%, ctx=4416, majf=0, minf=133 00:14:11.209 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:14:11.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:11.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:11.209 issued rwts: total=48941,26379,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:11.209 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:11.209 00:14:11.209 Run status group 0 (all jobs): 00:14:11.209 READ: bw=31.8MiB/s (33.4MB/s), 31.8MiB/s-31.8MiB/s (33.4MB/s-33.4MB/s), io=191MiB (200MB), run=6003-6003msec 00:14:11.209 WRITE: bw=19.8MiB/s (20.8MB/s), 19.8MiB/s-19.8MiB/s (20.8MB/s-20.8MB/s), io=103MiB (108MB), run=5203-5203msec 00:14:11.209 00:14:11.209 Disk stats (read/write): 00:14:11.209 nvme0n1: ios=47787/26379, merge=0/0, ticks=487599/231359, in_queue=718958, util=98.50% 00:14:11.209 08:56:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:14:11.467 08:56:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:14:11.724 08:56:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:14:11.724 08:56:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:14:11.724 08:56:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:11.724 08:56:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:11.724 08:56:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:14:11.724 08:56:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:14:11.724 08:56:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:14:11.724 08:56:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:14:11.724 08:56:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:11.724 08:56:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:11.724 08:56:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:14:11.724 08:56:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:14:11.724 08:56:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:14:11.724 08:56:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=69498 00:14:11.724 08:56:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:14:11.724 08:56:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:14:11.724 [global] 00:14:11.724 thread=1 00:14:11.724 invalidate=1 00:14:11.724 rw=randrw 00:14:11.724 time_based=1 00:14:11.724 runtime=6 00:14:11.724 ioengine=libaio 00:14:11.724 direct=1 00:14:11.724 bs=4096 00:14:11.724 iodepth=128 00:14:11.724 norandommap=0 00:14:11.724 numjobs=1 00:14:11.724 00:14:11.724 verify_dump=1 00:14:11.724 verify_backlog=512 00:14:11.724 verify_state_save=0 00:14:11.724 do_verify=1 00:14:11.724 verify=crc32c-intel 00:14:11.725 [job0] 00:14:11.725 filename=/dev/nvme0n1 00:14:11.725 Could not set queue depth (nvme0n1) 00:14:11.982 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:11.982 fio-3.35 00:14:11.982 Starting 1 thread 00:14:12.916 08:56:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:14:13.174 08:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:14:13.433 
08:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:14:13.433 08:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:14:13.433 08:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:13.433 08:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:13.433 08:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:14:13.433 08:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:14:13.433 08:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:14:13.433 08:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:14:13.433 08:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:13.433 08:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:13.433 08:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:14:13.433 08:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:14:13.433 08:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:14:13.694 08:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:14:13.951 08:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:14:13.951 08:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:14:13.951 08:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:13.951 08:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:13.951 08:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:14:13.951 08:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:14:13.951 08:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:14:13.951 08:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:14:13.951 08:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:13.951 08:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:13.951 08:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:14:13.951 08:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:14:13.951 08:56:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 69498 00:14:18.135 00:14:18.135 job0: (groupid=0, jobs=1): err= 0: pid=69519: Thu Jul 25 08:56:25 2024 00:14:18.135 read: IOPS=9333, BW=36.5MiB/s (38.2MB/s)(219MiB/6005msec) 00:14:18.135 slat (usec): min=6, max=10797, avg=56.33, stdev=256.87 00:14:18.135 clat (usec): min=309, max=21430, avg=9639.99, stdev=2937.77 00:14:18.135 lat (usec): min=320, max=21465, avg=9696.32, stdev=2955.44 00:14:18.135 clat percentiles (usec): 00:14:18.135 | 1.00th=[ 1696], 5.00th=[ 3589], 10.00th=[ 5080], 20.00th=[ 7767], 00:14:18.135 | 30.00th=[ 9372], 40.00th=[ 9896], 50.00th=[10290], 60.00th=[10421], 00:14:18.135 | 70.00th=[10814], 80.00th=[11207], 90.00th=[12125], 95.00th=[14746], 00:14:18.135 | 99.00th=[16450], 99.50th=[16909], 99.90th=[17695], 99.95th=[18220], 00:14:18.135 | 99.99th=[19268] 00:14:18.135 bw ( KiB/s): min= 4854, max=35792, per=50.77%, avg=18954.00, stdev=8475.61, samples=11 00:14:18.135 iops : min= 1213, max= 8948, avg=4738.45, stdev=2118.98, samples=11 00:14:18.135 write: IOPS=5406, BW=21.1MiB/s (22.1MB/s)(110MiB/5206msec); 0 zone resets 00:14:18.135 slat (usec): min=12, max=3191, avg=61.15, stdev=175.11 00:14:18.135 clat (usec): min=985, max=18149, avg=7897.42, stdev=2684.03 00:14:18.135 lat (usec): min=1016, max=18187, avg=7958.57, stdev=2705.32 00:14:18.135 clat percentiles (usec): 00:14:18.135 | 1.00th=[ 2040], 5.00th=[ 2966], 10.00th=[ 3687], 20.00th=[ 5080], 00:14:18.135 | 30.00th=[ 6456], 40.00th=[ 8160], 50.00th=[ 8848], 60.00th=[ 9241], 00:14:18.135 | 70.00th=[ 9634], 80.00th=[10028], 90.00th=[10421], 95.00th=[10945], 00:14:18.135 | 99.00th=[13960], 99.50th=[14746], 99.90th=[16188], 99.95th=[16712], 00:14:18.135 | 99.99th=[17433] 00:14:18.135 bw ( KiB/s): min= 5245, max=35193, per=87.73%, avg=18971.45, stdev=8370.66, samples=11 00:14:18.135 iops : min= 1311, max= 8798, avg=4742.82, stdev=2092.66, samples=11 00:14:18.135 lat (usec) : 500=0.03%, 750=0.09%, 1000=0.12% 00:14:18.135 lat (msec) : 2=0.97%, 4=7.19%, 10=46.71%, 20=44.89%, 50=0.01% 00:14:18.135 cpu : usr=4.76%, sys=19.49%, ctx=4962, majf=0, minf=96 00:14:18.135 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:14:18.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:18.135 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:18.135 issued rwts: total=56046,28144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:18.135 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:14:18.135 00:14:18.135 Run status group 0 (all jobs): 00:14:18.135 READ: bw=36.5MiB/s (38.2MB/s), 36.5MiB/s-36.5MiB/s (38.2MB/s-38.2MB/s), io=219MiB (230MB), run=6005-6005msec 00:14:18.135 WRITE: bw=21.1MiB/s (22.1MB/s), 21.1MiB/s-21.1MiB/s (22.1MB/s-22.1MB/s), io=110MiB (115MB), run=5206-5206msec 00:14:18.135 00:14:18.135 Disk stats (read/write): 00:14:18.135 nvme0n1: ios=55341/27632, merge=0/0, ticks=513674/206222, in_queue=719896, util=98.72% 00:14:18.135 08:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:18.135 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:14:18.135 08:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:18.135 08:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:14:18.135 08:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:18.135 08:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:18.135 08:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:18.135 08:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:18.135 08:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:14:18.135 08:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:18.393 08:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:14:18.393 08:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:14:18.393 08:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:14:18.393 08:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:14:18.393 08:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:18.393 08:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:18.393 08:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:18.393 08:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:18.393 08:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:18.393 08:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:18.393 rmmod nvme_tcp 00:14:18.393 rmmod nvme_fabrics 00:14:18.393 rmmod nvme_keyring 00:14:18.393 08:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:18.393 08:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:18.393 08:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:18.393 08:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # 
'[' -n 69313 ']' 00:14:18.393 08:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 69313 00:14:18.393 08:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@950 -- # '[' -z 69313 ']' 00:14:18.393 08:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # kill -0 69313 00:14:18.393 08:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # uname 00:14:18.393 08:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:18.393 08:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69313 00:14:18.699 08:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:18.699 08:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:18.699 killing process with pid 69313 00:14:18.699 08:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69313' 00:14:18.699 08:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@969 -- # kill 69313 00:14:18.699 08:56:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@974 -- # wait 69313 00:14:20.072 08:56:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:20.072 08:56:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:20.072 08:56:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:20.072 08:56:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:20.072 08:56:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:20.072 08:56:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:20.072 08:56:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:20.072 08:56:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:20.072 08:56:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:20.072 00:14:20.072 real 0m20.499s 00:14:20.072 user 1m15.132s 00:14:20.072 sys 0m9.062s 00:14:20.072 08:56:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:20.072 08:56:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:20.072 ************************************ 00:14:20.072 END TEST nvmf_target_multipath 00:14:20.073 ************************************ 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:20.073 
************************************ 00:14:20.073 START TEST nvmf_zcopy 00:14:20.073 ************************************ 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:20.073 * Looking for test storage... 00:14:20.073 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 
-eq 1 ']' 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:20.073 Cannot find device "nvmf_tgt_br" 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 
-- # true 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:20.073 Cannot find device "nvmf_tgt_br2" 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:20.073 Cannot find device "nvmf_tgt_br" 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:14:20.073 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:20.073 Cannot find device "nvmf_tgt_br2" 00:14:20.074 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:14:20.074 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:20.331 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:20.331 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:20.331 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:20.331 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:14:20.331 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:20.331 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:20.331 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:14:20.331 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:20.331 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:20.331 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:20.331 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:20.331 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:20.331 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:20.331 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:20.331 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:20.331 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:20.331 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:20.331 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:20.331 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:20.331 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:20.331 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:20.331 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:20.331 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:20.331 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:20.331 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:20.331 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:20.331 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:20.331 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:20.331 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:20.331 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:20.331 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:20.331 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:20.331 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:14:20.331 00:14:20.331 --- 10.0.0.2 ping statistics --- 00:14:20.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.331 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:14:20.331 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:20.331 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:20.331 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:14:20.331 00:14:20.331 --- 10.0.0.3 ping statistics --- 00:14:20.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.332 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:14:20.332 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:20.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:20.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:14:20.332 00:14:20.332 --- 10.0.0.1 ping statistics --- 00:14:20.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.332 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:14:20.332 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:20.332 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:14:20.332 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:20.332 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:20.332 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:20.332 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:20.332 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:20.332 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:20.332 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:20.332 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:14:20.332 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:20.332 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:20.332 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:20.589 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=69781 00:14:20.589 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:20.589 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 69781 00:14:20.589 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 69781 ']' 00:14:20.589 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.589 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:20.589 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.590 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:20.590 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:20.590 [2024-07-25 08:56:27.571604] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
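The trace above is nvmf/common.sh building the self-contained test network before the target comes up: a network namespace nvmf_tgt_ns_spdk holds the target ends of two veth pairs, the host-side peers are enslaved to a bridge nvmf_br, an iptables rule opens TCP port 4420 on the initiator-facing interface, and three pings confirm that 10.0.0.1 (host side) and 10.0.0.2/10.0.0.3 (namespace side) can reach each other; nvmf_tgt is then launched inside the namespace. A stand-alone sketch of the same topology, assuming root privileges and showing only one of the two target-side pairs (nvmf_tgt_if2 at 10.0.0.3 is set up identically), is:

    # create the namespace and the veth pairs (host peer <-> bridge-side peer)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # address the initiator (host) side and the target (namespace) side
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    # bring everything up, including loopback inside the namespace
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # tie the host-side peers together with a bridge and let NVMe/TCP traffic through
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # connectivity check in both directions before starting the target
    ping -c 1 10.0.0.2
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1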
00:14:20.590 [2024-07-25 08:56:27.571778] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:20.848 [2024-07-25 08:56:27.751903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.106 [2024-07-25 08:56:28.039035] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:21.106 [2024-07-25 08:56:28.039109] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:21.106 [2024-07-25 08:56:28.039142] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:21.106 [2024-07-25 08:56:28.039172] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:21.106 [2024-07-25 08:56:28.039194] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:21.106 [2024-07-25 08:56:28.039268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:21.364 [2024-07-25 08:56:28.267802] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:21.623 08:56:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:21.623 08:56:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:14:21.623 08:56:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:21.623 08:56:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:21.623 08:56:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:21.623 08:56:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:21.623 08:56:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:14:21.623 08:56:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:14:21.623 08:56:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.623 08:56:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:21.623 [2024-07-25 08:56:28.618388] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:21.623 08:56:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.623 08:56:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:21.623 08:56:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.623 08:56:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:21.623 08:56:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.623 08:56:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:21.623 08:56:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.623 08:56:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:14:21.623 [2024-07-25 08:56:28.634525] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:21.623 08:56:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.623 08:56:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:21.623 08:56:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.623 08:56:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:21.623 08:56:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.623 08:56:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:14:21.623 08:56:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.623 08:56:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:21.623 malloc0 00:14:21.623 08:56:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.623 08:56:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:21.623 08:56:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.623 08:56:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:21.623 08:56:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.623 08:56:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:14:21.623 08:56:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:14:21.623 08:56:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:14:21.623 08:56:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:14:21.623 08:56:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:21.623 08:56:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:21.623 { 00:14:21.623 "params": { 00:14:21.623 "name": "Nvme$subsystem", 00:14:21.623 "trtype": "$TEST_TRANSPORT", 00:14:21.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:21.623 "adrfam": "ipv4", 00:14:21.623 "trsvcid": "$NVMF_PORT", 00:14:21.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:21.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:21.623 "hdgst": ${hdgst:-false}, 00:14:21.623 "ddgst": ${ddgst:-false} 00:14:21.623 }, 00:14:21.623 "method": "bdev_nvme_attach_controller" 00:14:21.623 } 00:14:21.623 EOF 00:14:21.623 )") 00:14:21.623 08:56:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:14:21.623 08:56:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
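By this point the target inside the namespace has been configured entirely over its JSON-RPC socket: a TCP transport created with zero-copy enabled (nvmf_create_transport -t tcp -o -c 0 --zcopy), subsystem nqn.2016-06.io.spdk:cnode1 with a 10-namespace limit, a data listener plus a discovery listener on 10.0.0.2:4420, and a malloc0 bdev (bdev_malloc_create 32 4096) attached as namespace 1. The test drives these through its rpc_cmd helper; replayed with SPDK's regular rpc.py client the same sequence would look roughly like the sketch below (the scripts/rpc.py path relative to an SPDK checkout is an assumption; the method names and arguments are taken from the trace):

    RPC=./scripts/rpc.py    # assumed client path; the test wraps the same RPCs in rpc_cmd
    $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1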
00:14:21.623 08:56:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:14:21.623 08:56:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:21.623 "params": { 00:14:21.623 "name": "Nvme1", 00:14:21.623 "trtype": "tcp", 00:14:21.623 "traddr": "10.0.0.2", 00:14:21.623 "adrfam": "ipv4", 00:14:21.623 "trsvcid": "4420", 00:14:21.623 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:21.623 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:21.623 "hdgst": false, 00:14:21.623 "ddgst": false 00:14:21.623 }, 00:14:21.623 "method": "bdev_nvme_attach_controller" 00:14:21.623 }' 00:14:21.881 [2024-07-25 08:56:28.827986] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:21.881 [2024-07-25 08:56:28.828161] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69814 ] 00:14:22.139 [2024-07-25 08:56:29.008656] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.139 [2024-07-25 08:56:29.249998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.399 [2024-07-25 08:56:29.463528] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:22.659 Running I/O for 10 seconds... 00:14:32.627 00:14:32.627 Latency(us) 00:14:32.627 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:32.627 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:14:32.627 Verification LBA range: start 0x0 length 0x1000 00:14:32.627 Nvme1n1 : 10.02 4443.14 34.71 0.00 0.00 28725.90 3619.37 38130.04 00:14:32.627 =================================================================================================================== 00:14:32.627 Total : 4443.14 34.71 0.00 0.00 28725.90 3619.37 38130.04 00:14:34.003 08:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=69948 00:14:34.003 08:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:14:34.003 08:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:34.003 08:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:14:34.003 08:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:14:34.003 08:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:14:34.003 08:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:14:34.003 08:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:34.003 08:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:34.003 { 00:14:34.003 "params": { 00:14:34.003 "name": "Nvme$subsystem", 00:14:34.003 "trtype": "$TEST_TRANSPORT", 00:14:34.003 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:34.003 "adrfam": "ipv4", 00:14:34.003 "trsvcid": "$NVMF_PORT", 00:14:34.003 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:34.003 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:34.003 "hdgst": ${hdgst:-false}, 00:14:34.003 "ddgst": ${ddgst:-false} 00:14:34.003 }, 00:14:34.003 "method": "bdev_nvme_attach_controller" 00:14:34.003 } 00:14:34.003 
EOF 00:14:34.003 )") 00:14:34.003 [2024-07-25 08:56:40.911017] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.003 [2024-07-25 08:56:40.911083] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.003 08:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:14:34.003 08:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:14:34.003 08:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:14:34.003 08:56:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:34.003 "params": { 00:14:34.003 "name": "Nvme1", 00:14:34.003 "trtype": "tcp", 00:14:34.003 "traddr": "10.0.0.2", 00:14:34.003 "adrfam": "ipv4", 00:14:34.003 "trsvcid": "4420", 00:14:34.003 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:34.003 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:34.003 "hdgst": false, 00:14:34.003 "ddgst": false 00:14:34.003 }, 00:14:34.003 "method": "bdev_nvme_attach_controller" 00:14:34.003 }' 00:14:34.003 [2024-07-25 08:56:40.923026] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.003 [2024-07-25 08:56:40.923081] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.003 [2024-07-25 08:56:40.930974] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.003 [2024-07-25 08:56:40.931036] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.003 [2024-07-25 08:56:40.943018] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.003 [2024-07-25 08:56:40.943093] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.003 [2024-07-25 08:56:40.955005] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.003 [2024-07-25 08:56:40.955082] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.003 [2024-07-25 08:56:40.966983] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.003 [2024-07-25 08:56:40.967048] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.003 [2024-07-25 08:56:40.979025] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.003 [2024-07-25 08:56:40.979077] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.003 [2024-07-25 08:56:40.991041] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.003 [2024-07-25 08:56:40.991096] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.003 [2024-07-25 08:56:41.001889] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
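Two things are worth noting in the block above. First, the 10-second verify run completed cleanly and its numbers are self-consistent: 4443.14 IOPS of 8192-byte I/O works out to 34.71 MiB/s (4443.14 x 8192 / 2^20), and at a queue depth of 128 the expected average latency is 128 / 4443.14 s, roughly 28.8 ms, matching the reported 28725.90 us. Second, bdevperf is not configured over RPC: gen_nvmf_target_json prints a fragment containing a single bdev_nvme_attach_controller call and the test feeds it to bdevperf through a process substitution (--json /dev/fd/62). A hand-written equivalent, assuming the standard SPDK application JSON-config envelope around the fragment printed in the trace and a temporary file instead of a pipe, would be:

    # hypothetical stand-alone reproduction of the first bdevperf run; the JSON body
    # mirrors the fragment printf'd by gen_nvmf_target_json above, wrapped in the
    # "subsystems"/"bdev"/"config" envelope that SPDK applications read at startup
    cat > /tmp/zcopy_bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # path relative to an SPDK build tree; the log uses the absolute vagrant path
    ./build/examples/bdevperf --json /tmp/zcopy_bdev.json -t 10 -q 128 -w verify -o 8192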
00:14:34.003 [2024-07-25 08:56:41.002042] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69948 ] 00:14:34.003 [2024-07-25 08:56:41.002994] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.003 [2024-07-25 08:56:41.003038] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.003 [2024-07-25 08:56:41.015085] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.003 [2024-07-25 08:56:41.015155] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.003 [2024-07-25 08:56:41.027050] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.003 [2024-07-25 08:56:41.027112] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.003 [2024-07-25 08:56:41.039067] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.003 [2024-07-25 08:56:41.039126] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.003 [2024-07-25 08:56:41.051044] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.003 [2024-07-25 08:56:41.051113] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.003 [2024-07-25 08:56:41.063026] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.003 [2024-07-25 08:56:41.063095] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.003 [2024-07-25 08:56:41.075084] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.003 [2024-07-25 08:56:41.075160] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.003 [2024-07-25 08:56:41.087092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.003 [2024-07-25 08:56:41.087162] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.003 [2024-07-25 08:56:41.099059] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.003 [2024-07-25 08:56:41.099132] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.003 [2024-07-25 08:56:41.111114] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.003 [2024-07-25 08:56:41.111198] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.269 [2024-07-25 08:56:41.123047] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.269 [2024-07-25 08:56:41.123106] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.269 [2024-07-25 08:56:41.135086] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.269 [2024-07-25 08:56:41.135143] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.269 [2024-07-25 08:56:41.147081] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.269 [2024-07-25 08:56:41.147141] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.269 [2024-07-25 08:56:41.159118] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.269 [2024-07-25 08:56:41.159168] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.269 [2024-07-25 08:56:41.170586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.269 [2024-07-25 08:56:41.171160] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.269 [2024-07-25 08:56:41.171208] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.269 [2024-07-25 08:56:41.183218] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.269 [2024-07-25 08:56:41.183281] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.269 [2024-07-25 08:56:41.195180] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.269 [2024-07-25 08:56:41.195239] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.269 [2024-07-25 08:56:41.207130] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.269 [2024-07-25 08:56:41.207173] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.269 [2024-07-25 08:56:41.219127] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.269 [2024-07-25 08:56:41.219195] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.269 [2024-07-25 08:56:41.231187] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.269 [2024-07-25 08:56:41.231242] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.269 [2024-07-25 08:56:41.243201] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.269 [2024-07-25 08:56:41.243263] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.269 [2024-07-25 08:56:41.255178] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.269 [2024-07-25 08:56:41.255235] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.269 [2024-07-25 08:56:41.267227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.269 [2024-07-25 08:56:41.267291] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.269 [2024-07-25 08:56:41.279239] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.269 [2024-07-25 08:56:41.279312] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.269 [2024-07-25 08:56:41.291202] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.269 [2024-07-25 08:56:41.291262] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.269 [2024-07-25 08:56:41.303254] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.269 [2024-07-25 08:56:41.303316] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.269 [2024-07-25 08:56:41.315266] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.269 [2024-07-25 08:56:41.315327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.269 [2024-07-25 08:56:41.327256] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:14:34.269 [2024-07-25 08:56:41.327311] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.269 [2024-07-25 08:56:41.339234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.269 [2024-07-25 08:56:41.339291] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.269 [2024-07-25 08:56:41.351218] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.269 [2024-07-25 08:56:41.351270] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.269 [2024-07-25 08:56:41.363247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.269 [2024-07-25 08:56:41.363304] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.269 [2024-07-25 08:56:41.375268] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.269 [2024-07-25 08:56:41.375345] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.527 [2024-07-25 08:56:41.387237] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.527 [2024-07-25 08:56:41.387302] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.527 [2024-07-25 08:56:41.399280] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.527 [2024-07-25 08:56:41.399339] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.527 [2024-07-25 08:56:41.411251] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.527 [2024-07-25 08:56:41.411321] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.527 [2024-07-25 08:56:41.420156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.527 [2024-07-25 08:56:41.423254] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.527 [2024-07-25 08:56:41.423306] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.527 [2024-07-25 08:56:41.435276] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.527 [2024-07-25 08:56:41.435342] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.527 [2024-07-25 08:56:41.447299] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.527 [2024-07-25 08:56:41.447371] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.527 [2024-07-25 08:56:41.459330] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.527 [2024-07-25 08:56:41.459393] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.527 [2024-07-25 08:56:41.471304] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.527 [2024-07-25 08:56:41.471363] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.527 [2024-07-25 08:56:41.483303] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.527 [2024-07-25 08:56:41.483367] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.527 [2024-07-25 08:56:41.495311] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:14:34.527 [2024-07-25 08:56:41.495369] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.527 [2024-07-25 08:56:41.507344] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.527 [2024-07-25 08:56:41.507418] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.527 [2024-07-25 08:56:41.519326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.527 [2024-07-25 08:56:41.519389] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.527 [2024-07-25 08:56:41.531339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.527 [2024-07-25 08:56:41.531399] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.527 [2024-07-25 08:56:41.543270] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.527 [2024-07-25 08:56:41.543345] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.527 [2024-07-25 08:56:41.555350] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.527 [2024-07-25 08:56:41.555405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.527 [2024-07-25 08:56:41.567375] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.527 [2024-07-25 08:56:41.567428] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.527 [2024-07-25 08:56:41.579310] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.527 [2024-07-25 08:56:41.579376] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.527 [2024-07-25 08:56:41.591423] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.527 [2024-07-25 08:56:41.591487] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.527 [2024-07-25 08:56:41.603318] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.527 [2024-07-25 08:56:41.603376] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.527 [2024-07-25 08:56:41.615410] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.527 [2024-07-25 08:56:41.615466] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.527 [2024-07-25 08:56:41.627408] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.527 [2024-07-25 08:56:41.627468] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.527 [2024-07-25 08:56:41.637613] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:34.527 [2024-07-25 08:56:41.639336] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.527 [2024-07-25 08:56:41.639381] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.785 [2024-07-25 08:56:41.651447] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.785 [2024-07-25 08:56:41.651528] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.785 [2024-07-25 08:56:41.663364] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.785 
[2024-07-25 08:56:41.663422] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.786 [2024-07-25 08:56:41.675354] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.786 [2024-07-25 08:56:41.675412] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.786 [2024-07-25 08:56:41.687389] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.786 [2024-07-25 08:56:41.687444] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.786 [2024-07-25 08:56:41.699357] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.786 [2024-07-25 08:56:41.699417] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.786 [2024-07-25 08:56:41.711398] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.786 [2024-07-25 08:56:41.711460] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.786 [2024-07-25 08:56:41.723409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.786 [2024-07-25 08:56:41.723470] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.786 [2024-07-25 08:56:41.735432] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.786 [2024-07-25 08:56:41.735492] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.786 [2024-07-25 08:56:41.747404] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.786 [2024-07-25 08:56:41.747473] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.786 [2024-07-25 08:56:41.759397] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.786 [2024-07-25 08:56:41.759463] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.786 [2024-07-25 08:56:41.771434] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.786 [2024-07-25 08:56:41.771499] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.786 [2024-07-25 08:56:41.783569] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.786 [2024-07-25 08:56:41.783637] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.786 [2024-07-25 08:56:41.795492] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.786 [2024-07-25 08:56:41.795551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.786 [2024-07-25 08:56:41.807576] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.786 [2024-07-25 08:56:41.807637] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.786 [2024-07-25 08:56:41.819560] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.786 [2024-07-25 08:56:41.819626] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.786 [2024-07-25 08:56:41.831564] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.786 [2024-07-25 08:56:41.831672] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.786 Running I/O for 5 seconds... 
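From here to the end of the section the log is dominated by paired errors from subsystem.c ("Requested NSID 1 already in use") and nvmf_rpc.c ("Unable to add namespace"), repeating every few milliseconds while the second bdevperf run (-t 5 -q 128 -w randrw -M 50 -o 8192) is in flight. These are not test failures: the script keeps re-issuing nvmf_subsystem_add_ns for a namespace that is already attached, and the point of each rejected call appears to be that it still pauses and resumes the subsystem (the second error comes from the nvmf_rpc_ns_paused callback), exercising the target's handling of outstanding zero-copy requests across that transition. A sketch of the kind of loop that produces this pattern, assuming the add is simply retried until bdevperf (whose PID the test records as perfpid=69948) exits, is:

    # hypothetical driver loop; the real one lives in test/nvmf/target/zcopy.sh and may
    # differ in shape, but the visible effect is the same stream of rejections
    RPC=./scripts/rpc.py            # assumed client path, see the earlier sketch
    while kill -0 "$perfpid" 2> /dev/null; do
        # NSID 1 already exists, so the target pauses the subsystem, fails the add
        # with "Requested NSID 1 already in use", then resumes; '|| true' keeps the
        # loop alive across the expected non-zero exit status
        "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done
    wait "$perfpid"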
00:14:34.786 [2024-07-25 08:56:41.851614] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.786 [2024-07-25 08:56:41.851701] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.786 [2024-07-25 08:56:41.866917] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.786 [2024-07-25 08:56:41.866981] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.786 [2024-07-25 08:56:41.884830] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.786 [2024-07-25 08:56:41.884903] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.786 [2024-07-25 08:56:41.898492] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.786 [2024-07-25 08:56:41.898558] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.043 [2024-07-25 08:56:41.916791] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.043 [2024-07-25 08:56:41.916883] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.043 [2024-07-25 08:56:41.931292] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.043 [2024-07-25 08:56:41.931355] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.043 [2024-07-25 08:56:41.946782] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.043 [2024-07-25 08:56:41.946876] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.043 [2024-07-25 08:56:41.965639] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.043 [2024-07-25 08:56:41.965708] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.043 [2024-07-25 08:56:41.980455] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.043 [2024-07-25 08:56:41.980550] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.043 [2024-07-25 08:56:41.996336] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.043 [2024-07-25 08:56:41.996400] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.043 [2024-07-25 08:56:42.014475] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.043 [2024-07-25 08:56:42.014545] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.043 [2024-07-25 08:56:42.028694] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.043 [2024-07-25 08:56:42.028756] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.043 [2024-07-25 08:56:42.046488] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.043 [2024-07-25 08:56:42.046556] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.043 [2024-07-25 08:56:42.060852] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.043 [2024-07-25 08:56:42.060908] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.043 [2024-07-25 08:56:42.079807] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.044 
[2024-07-25 08:56:42.079928] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.044 [2024-07-25 08:56:42.098111] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.044 [2024-07-25 08:56:42.098177] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.044 [2024-07-25 08:56:42.111122] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.044 [2024-07-25 08:56:42.111187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.044 [2024-07-25 08:56:42.130250] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.044 [2024-07-25 08:56:42.130326] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.044 [2024-07-25 08:56:42.147024] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.044 [2024-07-25 08:56:42.147095] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.301 [2024-07-25 08:56:42.160617] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.301 [2024-07-25 08:56:42.160680] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.301 [2024-07-25 08:56:42.178691] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.301 [2024-07-25 08:56:42.178784] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.301 [2024-07-25 08:56:42.193505] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.301 [2024-07-25 08:56:42.193569] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.301 [2024-07-25 08:56:42.212127] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.301 [2024-07-25 08:56:42.212223] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.301 [2024-07-25 08:56:42.230255] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.301 [2024-07-25 08:56:42.230324] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.301 [2024-07-25 08:56:42.244024] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.301 [2024-07-25 08:56:42.244082] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.301 [2024-07-25 08:56:42.262331] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.301 [2024-07-25 08:56:42.262387] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.301 [2024-07-25 08:56:42.277022] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.301 [2024-07-25 08:56:42.277093] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.301 [2024-07-25 08:56:42.294721] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.301 [2024-07-25 08:56:42.294793] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.302 [2024-07-25 08:56:42.307967] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.302 [2024-07-25 08:56:42.308032] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.302 [2024-07-25 08:56:42.326664] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.302 [2024-07-25 08:56:42.326744] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.302 [2024-07-25 08:56:42.339405] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.302 [2024-07-25 08:56:42.339465] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.302 [2024-07-25 08:56:42.358479] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.302 [2024-07-25 08:56:42.358552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.302 [2024-07-25 08:56:42.375732] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.302 [2024-07-25 08:56:42.375808] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.302 [2024-07-25 08:56:42.392571] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.302 [2024-07-25 08:56:42.392649] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.302 [2024-07-25 08:56:42.406907] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.302 [2024-07-25 08:56:42.406993] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.559 [2024-07-25 08:56:42.425773] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.559 [2024-07-25 08:56:42.425901] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.559 [2024-07-25 08:56:42.441193] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.559 [2024-07-25 08:56:42.441317] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.559 [2024-07-25 08:56:42.459393] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.559 [2024-07-25 08:56:42.459476] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.559 [2024-07-25 08:56:42.475872] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.560 [2024-07-25 08:56:42.475984] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.560 [2024-07-25 08:56:42.488991] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.560 [2024-07-25 08:56:42.489091] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.560 [2024-07-25 08:56:42.508284] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.560 [2024-07-25 08:56:42.508363] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.560 [2024-07-25 08:56:42.525353] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.560 [2024-07-25 08:56:42.525424] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.560 [2024-07-25 08:56:42.538406] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.560 [2024-07-25 08:56:42.538475] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.560 [2024-07-25 08:56:42.558553] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.560 [2024-07-25 08:56:42.558623] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.560 [2024-07-25 08:56:42.575486] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.560 [2024-07-25 08:56:42.575551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.560 [2024-07-25 08:56:42.588511] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.560 [2024-07-25 08:56:42.588592] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.560 [2024-07-25 08:56:42.607434] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.560 [2024-07-25 08:56:42.607507] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.560 [2024-07-25 08:56:42.624140] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.560 [2024-07-25 08:56:42.624201] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.560 [2024-07-25 08:56:42.637395] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.560 [2024-07-25 08:56:42.637458] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.560 [2024-07-25 08:56:42.655900] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.560 [2024-07-25 08:56:42.655962] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.560 [2024-07-25 08:56:42.670848] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.560 [2024-07-25 08:56:42.670926] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.818 [2024-07-25 08:56:42.688724] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.818 [2024-07-25 08:56:42.688791] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.818 [2024-07-25 08:56:42.705811] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.818 [2024-07-25 08:56:42.705968] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.818 [2024-07-25 08:56:42.722075] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.818 [2024-07-25 08:56:42.722137] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.818 [2024-07-25 08:56:42.739387] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.818 [2024-07-25 08:56:42.739455] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.818 [2024-07-25 08:56:42.756051] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.818 [2024-07-25 08:56:42.756115] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.818 [2024-07-25 08:56:42.769268] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.818 [2024-07-25 08:56:42.769351] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.818 [2024-07-25 08:56:42.788912] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.818 [2024-07-25 08:56:42.788982] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.818 [2024-07-25 08:56:42.803432] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.818 [2024-07-25 08:56:42.803503] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.818 [2024-07-25 08:56:42.821286] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.818 [2024-07-25 08:56:42.821356] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.818 [2024-07-25 08:56:42.835370] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.818 [2024-07-25 08:56:42.835440] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.818 [2024-07-25 08:56:42.854045] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.818 [2024-07-25 08:56:42.854125] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.818 [2024-07-25 08:56:42.869007] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.818 [2024-07-25 08:56:42.869087] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.818 [2024-07-25 08:56:42.887012] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.818 [2024-07-25 08:56:42.887095] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.818 [2024-07-25 08:56:42.901542] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.818 [2024-07-25 08:56:42.901652] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.818 [2024-07-25 08:56:42.921324] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.818 [2024-07-25 08:56:42.921412] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.076 [2024-07-25 08:56:42.936984] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.076 [2024-07-25 08:56:42.937082] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.076 [2024-07-25 08:56:42.954204] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.076 [2024-07-25 08:56:42.954305] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.076 [2024-07-25 08:56:42.970641] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.076 [2024-07-25 08:56:42.970723] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.076 [2024-07-25 08:56:42.983546] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.076 [2024-07-25 08:56:42.983593] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.076 [2024-07-25 08:56:43.002668] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.076 [2024-07-25 08:56:43.002739] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.076 [2024-07-25 08:56:43.017836] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.076 [2024-07-25 08:56:43.017908] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.076 [2024-07-25 08:56:43.035407] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.076 [2024-07-25 08:56:43.035480] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.076 [2024-07-25 08:56:43.050460] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.076 [2024-07-25 08:56:43.050523] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.076 [2024-07-25 08:56:43.068433] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.076 [2024-07-25 08:56:43.068550] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.076 [2024-07-25 08:56:43.083398] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.076 [2024-07-25 08:56:43.083458] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.076 [2024-07-25 08:56:43.101273] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.076 [2024-07-25 08:56:43.101345] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.076 [2024-07-25 08:56:43.115433] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.076 [2024-07-25 08:56:43.115491] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.076 [2024-07-25 08:56:43.132562] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.076 [2024-07-25 08:56:43.132613] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.076 [2024-07-25 08:56:43.145342] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.076 [2024-07-25 08:56:43.145550] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.076 [2024-07-25 08:56:43.164045] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.076 [2024-07-25 08:56:43.164109] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.076 [2024-07-25 08:56:43.180681] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.076 [2024-07-25 08:56:43.180749] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.334 [2024-07-25 08:56:43.193591] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.334 [2024-07-25 08:56:43.193651] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.334 [2024-07-25 08:56:43.212923] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.334 [2024-07-25 08:56:43.212994] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.334 [2024-07-25 08:56:43.230505] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.334 [2024-07-25 08:56:43.230573] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.334 [2024-07-25 08:56:43.244021] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.334 [2024-07-25 08:56:43.244081] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.334 [2024-07-25 08:56:43.263731] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.334 [2024-07-25 08:56:43.263801] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.334 [2024-07-25 08:56:43.278208] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.334 [2024-07-25 08:56:43.278285] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.334 [2024-07-25 08:56:43.293552] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.334 [2024-07-25 08:56:43.293624] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.334 [2024-07-25 08:56:43.311586] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.334 [2024-07-25 08:56:43.311654] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.334 [2024-07-25 08:56:43.325502] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.334 [2024-07-25 08:56:43.325563] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.334 [2024-07-25 08:56:43.344454] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.334 [2024-07-25 08:56:43.344551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.334 [2024-07-25 08:56:43.360033] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.334 [2024-07-25 08:56:43.360098] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.334 [2024-07-25 08:56:43.375985] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.334 [2024-07-25 08:56:43.376055] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.334 [2024-07-25 08:56:43.394113] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.334 [2024-07-25 08:56:43.394174] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.334 [2024-07-25 08:56:43.411569] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.334 [2024-07-25 08:56:43.411633] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.334 [2024-07-25 08:56:43.425587] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.334 [2024-07-25 08:56:43.425650] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.334 [2024-07-25 08:56:43.443533] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.334 [2024-07-25 08:56:43.443601] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.592 [2024-07-25 08:56:43.457891] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.592 [2024-07-25 08:56:43.457952] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.592 [2024-07-25 08:56:43.474761] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.592 [2024-07-25 08:56:43.474844] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.592 [2024-07-25 08:56:43.487094] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.592 [2024-07-25 08:56:43.487139] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.592 [2024-07-25 08:56:43.505701] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.592 [2024-07-25 08:56:43.505750] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.592 [2024-07-25 08:56:43.521799] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.592 [2024-07-25 08:56:43.521890] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.593 [2024-07-25 08:56:43.537916] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.593 [2024-07-25 08:56:43.537970] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.593 [2024-07-25 08:56:43.549420] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.593 [2024-07-25 08:56:43.549464] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.593 [2024-07-25 08:56:43.565498] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.593 [2024-07-25 08:56:43.565544] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.593 [2024-07-25 08:56:43.580958] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.593 [2024-07-25 08:56:43.581003] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.593 [2024-07-25 08:56:43.598174] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.593 [2024-07-25 08:56:43.598234] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.593 [2024-07-25 08:56:43.610716] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.593 [2024-07-25 08:56:43.610760] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.593 [2024-07-25 08:56:43.629188] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.593 [2024-07-25 08:56:43.629236] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.593 [2024-07-25 08:56:43.645471] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.593 [2024-07-25 08:56:43.645521] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.593 [2024-07-25 08:56:43.658463] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.593 [2024-07-25 08:56:43.658510] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.593 [2024-07-25 08:56:43.677366] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.593 [2024-07-25 08:56:43.677413] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.593 [2024-07-25 08:56:43.693541] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.593 [2024-07-25 08:56:43.693607] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.593 [2024-07-25 08:56:43.706373] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.593 [2024-07-25 08:56:43.706429] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.887 [2024-07-25 08:56:43.725480] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.887 [2024-07-25 08:56:43.725542] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.887 [2024-07-25 08:56:43.743212] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.887 [2024-07-25 08:56:43.743275] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.887 [2024-07-25 08:56:43.758784] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.887 [2024-07-25 08:56:43.758894] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.887 [2024-07-25 08:56:43.774299] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.887 [2024-07-25 08:56:43.774344] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.887 [2024-07-25 08:56:43.787594] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.887 [2024-07-25 08:56:43.787655] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.887 [2024-07-25 08:56:43.805282] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.887 [2024-07-25 08:56:43.805331] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.887 [2024-07-25 08:56:43.821698] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.887 [2024-07-25 08:56:43.821742] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.887 [2024-07-25 08:56:43.834319] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.887 [2024-07-25 08:56:43.834364] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.887 [2024-07-25 08:56:43.852573] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.887 [2024-07-25 08:56:43.852617] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.887 [2024-07-25 08:56:43.870489] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.887 [2024-07-25 08:56:43.870657] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.887 [2024-07-25 08:56:43.883385] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.887 [2024-07-25 08:56:43.883567] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.887 [2024-07-25 08:56:43.901963] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.887 [2024-07-25 08:56:43.902009] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.887 [2024-07-25 08:56:43.915774] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.887 [2024-07-25 08:56:43.915864] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.887 [2024-07-25 08:56:43.934659] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.887 [2024-07-25 08:56:43.934723] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.887 [2024-07-25 08:56:43.949414] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.887 [2024-07-25 08:56:43.949460] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.887 [2024-07-25 08:56:43.966236] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.887 [2024-07-25 08:56:43.966283] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.887 [2024-07-25 08:56:43.983226] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.887 [2024-07-25 08:56:43.983291] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.887 [2024-07-25 08:56:43.996679] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.887 [2024-07-25 08:56:43.996759] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.145 [2024-07-25 08:56:44.013750] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.145 [2024-07-25 08:56:44.013835] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.145 [2024-07-25 08:56:44.028799] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.145 [2024-07-25 08:56:44.028862] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.145 [2024-07-25 08:56:44.043847] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.145 [2024-07-25 08:56:44.043904] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.145 [2024-07-25 08:56:44.059523] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.145 [2024-07-25 08:56:44.059585] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.145 [2024-07-25 08:56:44.076643] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.145 [2024-07-25 08:56:44.076693] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.145 [2024-07-25 08:56:44.089444] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.145 [2024-07-25 08:56:44.089509] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.145 [2024-07-25 08:56:44.108956] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.145 [2024-07-25 08:56:44.109025] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.145 [2024-07-25 08:56:44.125866] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.145 [2024-07-25 08:56:44.125963] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.146 [2024-07-25 08:56:44.138644] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.146 [2024-07-25 08:56:44.138722] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.146 [2024-07-25 08:56:44.157081] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.146 [2024-07-25 08:56:44.157159] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.146 [2024-07-25 08:56:44.172947] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.146 [2024-07-25 08:56:44.173008] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.146 [2024-07-25 08:56:44.190478] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.146 [2024-07-25 08:56:44.190559] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.146 [2024-07-25 08:56:44.207690] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.146 [2024-07-25 08:56:44.207750] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.146 [2024-07-25 08:56:44.223274] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.146 [2024-07-25 08:56:44.223349] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.146 [2024-07-25 08:56:44.239763] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.146 [2024-07-25 08:56:44.239849] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.146 [2024-07-25 08:56:44.257258] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.146 [2024-07-25 08:56:44.257322] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.404 [2024-07-25 08:56:44.272387] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.404 [2024-07-25 08:56:44.272445] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.404 [2024-07-25 08:56:44.288583] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.404 [2024-07-25 08:56:44.288647] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.404 [2024-07-25 08:56:44.306252] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.404 [2024-07-25 08:56:44.306313] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.404 [2024-07-25 08:56:44.322334] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.404 [2024-07-25 08:56:44.322398] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.404 [2024-07-25 08:56:44.339803] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.404 [2024-07-25 08:56:44.339873] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.404 [2024-07-25 08:56:44.355123] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.404 [2024-07-25 08:56:44.355222] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.404 [2024-07-25 08:56:44.370641] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.404 [2024-07-25 08:56:44.370701] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.404 [2024-07-25 08:56:44.386073] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.404 [2024-07-25 08:56:44.386154] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.404 [2024-07-25 08:56:44.401206] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.404 [2024-07-25 08:56:44.401265] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.404 [2024-07-25 08:56:44.416620] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.404 [2024-07-25 08:56:44.416674] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.404 [2024-07-25 08:56:44.429949] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.404 [2024-07-25 08:56:44.430000] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.404 [2024-07-25 08:56:44.449185] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.404 [2024-07-25 08:56:44.449246] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.404 [2024-07-25 08:56:44.464034] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.404 [2024-07-25 08:56:44.464088] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.404 [2024-07-25 08:56:44.481807] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.404 [2024-07-25 08:56:44.481881] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.404 [2024-07-25 08:56:44.494777] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.404 [2024-07-25 08:56:44.494870] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.404 [2024-07-25 08:56:44.513755] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.404 [2024-07-25 08:56:44.513834] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.662 [2024-07-25 08:56:44.527600] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.662 [2024-07-25 08:56:44.527662] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.662 [2024-07-25 08:56:44.545999] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.662 [2024-07-25 08:56:44.546077] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.662 [2024-07-25 08:56:44.559970] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.662 [2024-07-25 08:56:44.560057] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.662 [2024-07-25 08:56:44.576782] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.662 [2024-07-25 08:56:44.576845] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.662 [2024-07-25 08:56:44.592854] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.662 [2024-07-25 08:56:44.592975] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.662 [2024-07-25 08:56:44.609599] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.662 [2024-07-25 08:56:44.609665] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.662 [2024-07-25 08:56:44.626632] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.662 [2024-07-25 08:56:44.626709] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.662 [2024-07-25 08:56:44.643514] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.662 [2024-07-25 08:56:44.643578] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.662 [2024-07-25 08:56:44.658752] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.662 [2024-07-25 08:56:44.658842] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.662 [2024-07-25 08:56:44.671226] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.662 [2024-07-25 08:56:44.671317] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.662 [2024-07-25 08:56:44.689772] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.662 [2024-07-25 08:56:44.689859] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.662 [2024-07-25 08:56:44.705718] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.662 [2024-07-25 08:56:44.705778] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.662 [2024-07-25 08:56:44.721481] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.662 [2024-07-25 08:56:44.721557] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.662 [2024-07-25 08:56:44.737358] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.662 [2024-07-25 08:56:44.737421] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.662 [2024-07-25 08:56:44.750899] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.662 [2024-07-25 08:56:44.750964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.662 [2024-07-25 08:56:44.769364] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.662 [2024-07-25 08:56:44.769412] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.920 [2024-07-25 08:56:44.785923] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.920 [2024-07-25 08:56:44.785974] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.920 [2024-07-25 08:56:44.801503] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.920 [2024-07-25 08:56:44.801566] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.920 [2024-07-25 08:56:44.817594] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.920 [2024-07-25 08:56:44.817642] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.920 [2024-07-25 08:56:44.834873] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.920 [2024-07-25 08:56:44.834922] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.920 [2024-07-25 08:56:44.847383] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.920 [2024-07-25 08:56:44.847434] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.920 [2024-07-25 08:56:44.866815] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.920 [2024-07-25 08:56:44.866895] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.920 [2024-07-25 08:56:44.880758] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.920 [2024-07-25 08:56:44.880831] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.921 [2024-07-25 08:56:44.898879] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.921 [2024-07-25 08:56:44.898957] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.921 [2024-07-25 08:56:44.913206] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.921 [2024-07-25 08:56:44.913269] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.921 [2024-07-25 08:56:44.930720] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.921 [2024-07-25 08:56:44.930798] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.921 [2024-07-25 08:56:44.944845] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.921 [2024-07-25 08:56:44.944941] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.921 [2024-07-25 08:56:44.964135] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.921 [2024-07-25 08:56:44.964201] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.921 [2024-07-25 08:56:44.981310] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.921 [2024-07-25 08:56:44.981371] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.921 [2024-07-25 08:56:44.994489] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.921 [2024-07-25 08:56:44.994549] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.921 [2024-07-25 08:56:45.011977] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.921 [2024-07-25 08:56:45.012039] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.921 [2024-07-25 08:56:45.027668] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.921 [2024-07-25 08:56:45.027731] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.179 [2024-07-25 08:56:45.043693] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.179 [2024-07-25 08:56:45.043756] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.179 [2024-07-25 08:56:45.056660] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.179 [2024-07-25 08:56:45.056722] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.179 [2024-07-25 08:56:45.076481] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.179 [2024-07-25 08:56:45.076580] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.179 [2024-07-25 08:56:45.090512] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.179 [2024-07-25 08:56:45.090600] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.179 [2024-07-25 08:56:45.109602] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.179 [2024-07-25 08:56:45.109668] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.179 [2024-07-25 08:56:45.126246] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.179 [2024-07-25 08:56:45.126323] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.179 [2024-07-25 08:56:45.142574] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.179 [2024-07-25 08:56:45.142658] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.179 [2024-07-25 08:56:45.156035] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.179 [2024-07-25 08:56:45.156098] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.179 [2024-07-25 08:56:45.175597] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.179 [2024-07-25 08:56:45.175665] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.179 [2024-07-25 08:56:45.190199] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.179 [2024-07-25 08:56:45.190288] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.179 [2024-07-25 08:56:45.207757] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.179 [2024-07-25 08:56:45.207845] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.179 [2024-07-25 08:56:45.224433] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.179 [2024-07-25 08:56:45.224499] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.179 [2024-07-25 08:56:45.241643] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.179 [2024-07-25 08:56:45.241698] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.179 [2024-07-25 08:56:45.256912] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.179 [2024-07-25 08:56:45.256984] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.179 [2024-07-25 08:56:45.269514] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.179 [2024-07-25 08:56:45.269583] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.179 [2024-07-25 08:56:45.289186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.179 [2024-07-25 08:56:45.289283] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.437 [2024-07-25 08:56:45.305409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.437 [2024-07-25 08:56:45.305497] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.437 [2024-07-25 08:56:45.318457] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.437 [2024-07-25 08:56:45.318523] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.437 [2024-07-25 08:56:45.337280] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.437 [2024-07-25 08:56:45.337343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.437 [2024-07-25 08:56:45.353669] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.437 [2024-07-25 08:56:45.353737] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.437 [2024-07-25 08:56:45.369625] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.437 [2024-07-25 08:56:45.369719] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.437 [2024-07-25 08:56:45.382703] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.437 [2024-07-25 08:56:45.382777] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.437 [2024-07-25 08:56:45.401923] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.437 [2024-07-25 08:56:45.401987] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.437 [2024-07-25 08:56:45.418185] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.437 [2024-07-25 08:56:45.418247] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.437 [2024-07-25 08:56:45.430729] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.437 [2024-07-25 08:56:45.430793] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.437 [2024-07-25 08:56:45.449996] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.437 [2024-07-25 08:56:45.450059] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.437 [2024-07-25 08:56:45.465332] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.437 [2024-07-25 08:56:45.465412] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.437 [2024-07-25 08:56:45.483235] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.437 [2024-07-25 08:56:45.483308] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.437 [2024-07-25 08:56:45.500586] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.437 [2024-07-25 08:56:45.500673] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.437 [2024-07-25 08:56:45.513776] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.437 [2024-07-25 08:56:45.513858] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.437 [2024-07-25 08:56:45.533055] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.437 [2024-07-25 08:56:45.533146] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.437 [2024-07-25 08:56:45.549987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.437 [2024-07-25 08:56:45.550089] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.695 [2024-07-25 08:56:45.563466] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.695 [2024-07-25 08:56:45.563546] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.695 [2024-07-25 08:56:45.581322] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.695 [2024-07-25 08:56:45.581410] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.695 [2024-07-25 08:56:45.598361] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.695 [2024-07-25 08:56:45.598464] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.695 [2024-07-25 08:56:45.614540] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.695 [2024-07-25 08:56:45.614624] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.695 [2024-07-25 08:56:45.627476] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.695 [2024-07-25 08:56:45.627554] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.695 [2024-07-25 08:56:45.646326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.695 [2024-07-25 08:56:45.646432] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.695 [2024-07-25 08:56:45.661090] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.695 [2024-07-25 08:56:45.661186] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.695 [2024-07-25 08:56:45.678907] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.695 [2024-07-25 08:56:45.678991] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.695 [2024-07-25 08:56:45.696060] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.695 [2024-07-25 08:56:45.696143] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.695 [2024-07-25 08:56:45.712247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.695 [2024-07-25 08:56:45.712347] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.695 [2024-07-25 08:56:45.725671] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.695 [2024-07-25 08:56:45.725744] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.695 [2024-07-25 08:56:45.744775] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.695 [2024-07-25 08:56:45.744846] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.695 [2024-07-25 08:56:45.761511] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.695 [2024-07-25 08:56:45.761580] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.695 [2024-07-25 08:56:45.774216] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.695 [2024-07-25 08:56:45.774292] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.695 [2024-07-25 08:56:45.794386] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.695 [2024-07-25 08:56:45.794479] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.952 [2024-07-25 08:56:45.809385] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.952 [2024-07-25 08:56:45.809469] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.952 [2024-07-25 08:56:45.825144] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.952 [2024-07-25 08:56:45.825230] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.952 [2024-07-25 08:56:45.843443] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.952 [2024-07-25 08:56:45.843526] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.952 [2024-07-25 08:56:45.860108] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.952 [2024-07-25 08:56:45.860206] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.952 [2024-07-25 08:56:45.873099] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.952 [2024-07-25 08:56:45.873191] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.952 [2024-07-25 08:56:45.891971] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.952 [2024-07-25 08:56:45.892046] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.952 [2024-07-25 08:56:45.909454] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.952 [2024-07-25 08:56:45.909531] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.952 [2024-07-25 08:56:45.922434] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.952 [2024-07-25 08:56:45.922503] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.952 [2024-07-25 08:56:45.941718] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.952 [2024-07-25 08:56:45.941800] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.952 [2024-07-25 08:56:45.956616] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.952 [2024-07-25 08:56:45.956689] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.952 [2024-07-25 08:56:45.972338] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.952 [2024-07-25 08:56:45.972408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.952 [2024-07-25 08:56:45.990360] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.952 [2024-07-25 08:56:45.990452] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.952 [2024-07-25 08:56:46.007206] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.952 [2024-07-25 08:56:46.007308] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.952 [2024-07-25 08:56:46.020396] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.952 [2024-07-25 08:56:46.020463] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.952 [2024-07-25 08:56:46.039767] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.952 [2024-07-25 08:56:46.039871] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.952 [2024-07-25 08:56:46.057012] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.952 [2024-07-25 08:56:46.057097] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.210 [2024-07-25 08:56:46.069745] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.210 [2024-07-25 08:56:46.069852] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.210 [2024-07-25 08:56:46.088941] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.210 [2024-07-25 08:56:46.089033] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.210 [2024-07-25 08:56:46.103513] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.210 [2024-07-25 08:56:46.103590] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.210 [2024-07-25 08:56:46.118597] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.210 [2024-07-25 08:56:46.118680] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.210 [2024-07-25 08:56:46.133410] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.210 [2024-07-25 08:56:46.133494] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.210 [2024-07-25 08:56:46.150635] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.210 [2024-07-25 08:56:46.150725] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.210 [2024-07-25 08:56:46.164648] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.210 [2024-07-25 08:56:46.164721] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.210 [2024-07-25 08:56:46.182821] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.210 [2024-07-25 08:56:46.182952] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.210 [2024-07-25 08:56:46.199390] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.210 [2024-07-25 08:56:46.199487] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.210 [2024-07-25 08:56:46.213061] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.210 [2024-07-25 08:56:46.213121] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.210 [2024-07-25 08:56:46.232761] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.210 [2024-07-25 08:56:46.232861] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.210 [2024-07-25 08:56:46.247797] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.210 [2024-07-25 08:56:46.247900] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.210 [2024-07-25 08:56:46.264035] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.210 [2024-07-25 08:56:46.264109] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.210 [2024-07-25 08:56:46.283194] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.210 [2024-07-25 08:56:46.283279] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.210 [2024-07-25 08:56:46.297694] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.210 [2024-07-25 08:56:46.297768] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.210 [2024-07-25 08:56:46.313426] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.210 [2024-07-25 08:56:46.313499] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.468 [2024-07-25 08:56:46.331080] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.468 [2024-07-25 08:56:46.331147] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.468 [2024-07-25 08:56:46.347337] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.468 [2024-07-25 08:56:46.347402] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.468 [2024-07-25 08:56:46.360478] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.468 [2024-07-25 08:56:46.360573] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.468 [2024-07-25 08:56:46.379987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.468 [2024-07-25 08:56:46.380059] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.468 [2024-07-25 08:56:46.397310] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.468 [2024-07-25 08:56:46.397379] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.468 [2024-07-25 08:56:46.410389] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.468 [2024-07-25 08:56:46.410448] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.468 [2024-07-25 08:56:46.428984] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.468 [2024-07-25 08:56:46.429054] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.468 [2024-07-25 08:56:46.442581] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.468 [2024-07-25 08:56:46.442643] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.468 [2024-07-25 08:56:46.461666] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.468 [2024-07-25 08:56:46.461731] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.468 [2024-07-25 08:56:46.476104] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.468 [2024-07-25 08:56:46.476165] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.468 [2024-07-25 08:56:46.491889] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.468 [2024-07-25 08:56:46.491955] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.468 [2024-07-25 08:56:46.510091] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.468 [2024-07-25 08:56:46.510162] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.468 [2024-07-25 08:56:46.523726] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.468 [2024-07-25 08:56:46.523788] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.468 [2024-07-25 08:56:46.542607] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.468 [2024-07-25 08:56:46.542677] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.468 [2024-07-25 08:56:46.559793] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.468 [2024-07-25 08:56:46.559892] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.468 [2024-07-25 08:56:46.573169] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.468 [2024-07-25 08:56:46.573234] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.726 [2024-07-25 08:56:46.592181] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.726 [2024-07-25 08:56:46.592285] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.726 [2024-07-25 08:56:46.609337] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.726 [2024-07-25 08:56:46.609415] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.726 [2024-07-25 08:56:46.622515] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.726 [2024-07-25 08:56:46.622572] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.726 [2024-07-25 08:56:46.641628] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.726 [2024-07-25 08:56:46.641708] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.726 [2024-07-25 08:56:46.656139] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.726 [2024-07-25 08:56:46.656213] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.726 [2024-07-25 08:56:46.673998] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.726 [2024-07-25 08:56:46.674065] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.726 [2024-07-25 08:56:46.691433] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.726 [2024-07-25 08:56:46.691502] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.726 [2024-07-25 08:56:46.704921] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.726 [2024-07-25 08:56:46.704985] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.726 [2024-07-25 08:56:46.724747] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.726 [2024-07-25 08:56:46.724837] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.726 [2024-07-25 08:56:46.742364] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.726 [2024-07-25 08:56:46.742428] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.727 [2024-07-25 08:56:46.755706] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.727 [2024-07-25 08:56:46.755770] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.727 [2024-07-25 08:56:46.774667] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.727 [2024-07-25 08:56:46.774740] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.727 [2024-07-25 08:56:46.789110] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.727 [2024-07-25 08:56:46.789191] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.727 [2024-07-25 08:56:46.804485] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.727 [2024-07-25 08:56:46.804576] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.727 [2024-07-25 08:56:46.823304] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.727 [2024-07-25 08:56:46.823372] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.727 [2024-07-25 08:56:46.838411] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.727 [2024-07-25 08:56:46.838472] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.985 [2024-07-25 08:56:46.851771] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.985 [2024-07-25 08:56:46.851891] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.985
00:14:39.985 Latency(us)
00:14:39.985 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:39.985 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:14:39.985 Nvme1n1 : 5.01 8573.37 66.98 0.00 0.00 14905.56 3783.21 23950.43
00:14:39.985 ===================================================================================================================
00:14:39.985 Total : 8573.37 66.98 0.00 0.00 14905.56 3783.21 23950.43
00:14:39.985 [2024-07-25 08:56:46.861629] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.985 [2024-07-25 08:56:46.861678] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.985 [2024-07-25 08:56:46.873621] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.985 [2024-07-25 08:56:46.873675] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.985 [2024-07-25 08:56:46.885569] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.985 [2024-07-25 08:56:46.885611] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.985 [2024-07-25 08:56:46.897625] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.985 [2024-07-25 08:56:46.897677] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.985 [2024-07-25 08:56:46.909641] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.985 [2024-07-25 08:56:46.909693] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.985 [2024-07-25 08:56:46.921606] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.985 [2024-07-25 08:56:46.921649] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.985 [2024-07-25 08:56:46.933634] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.985 [2024-07-25 08:56:46.933678] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.985 [2024-07-25 08:56:46.945592] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.985 [2024-07-25 08:56:46.945634] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:39.985 [2024-07-25 08:56:46.957626]
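The Latency(us) block a few lines above is the per-job performance summary printed at the end of this run: one row per job (here Nvme1n1) plus a Total row, with the columns runtime(s), IOPS, MiB/s, Fail/s, TO/s, and Average/min/max latency in microseconds. A minimal sketch for pulling those headline numbers out of a captured console log follows; it assumes only the column order shown in the header above, and the sample lines are copied from this log (the helper name and any file handling are illustrative, not part of this job's scripts).

import re

# Matches summary rows such as:
#   "Nvme1n1 : 5.01 8573.37 66.98 0.00 0.00 14905.56 3783.21 23950.43"
#   "Total : 8573.37 66.98 0.00 0.00 14905.56 3783.21 23950.43"
ROW = re.compile(
    r"(?P<name>\S+)\s*:\s*"
    r"(?P<numbers>(?:-?\d+(?:\.\d+)?\s+)+-?\d+(?:\.\d+)?)\s*$"
)

def parse_summary(lines):
    """Yield (row name, list of numbers) for each summary row found in the given log lines."""
    for line in lines:
        # Strip the elapsed-time prefix (e.g. "00:14:39.985 ") that the console log adds.
        line = re.sub(r"^\d{2}:\d{2}:\d{2}\.\d{3}\s+", "", line.strip())
        m = ROW.match(line)
        if m:
            yield m.group("name"), [float(x) for x in m.group("numbers").split()]

if __name__ == "__main__":
    sample = [
        "00:14:39.985 Nvme1n1 : 5.01 8573.37 66.98 0.00 0.00 14905.56 3783.21 23950.43",
        "00:14:39.985 Total : 8573.37 66.98 0.00 0.00 14905.56 3783.21 23950.43",
    ]
    for name, nums in parse_summary(sample):
        if name == "Total":
            # The Total row in this log omits the runtime column, so IOPS comes first.
            iops, mibs, avg_us = nums[0], nums[1], nums[4]
        else:
            iops, mibs, avg_us = nums[1], nums[2], nums[5]
        print(f"{name}: {iops:.2f} IOPS, {mibs:.2f} MiB/s, {avg_us:.2f} us avg latency")

Run against the two sample rows, this prints 8573.37 IOPS, 66.98 MiB/s and an average latency of 14905.56 us for both Nvme1n1 and Total, matching the figures in the summary above.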
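The repeated subsystem.c:2058 / nvmf_rpc.c:1553 pair that fills this section (and continues below) is the target rejecting an add-namespace RPC because NSID 1 is already attached to the subsystem, which spdk_nvmf_subsystem_add_ns_ext reports as "Requested NSID 1 already in use" and the RPC layer surfaces as "Unable to add namespace". As a rough, hedged sketch of how such a rejection can be provoked against a locally running SPDK target over its JSON-RPC socket: the socket path, subsystem NQN, and bdev names below are assumptions for illustration and are not taken from this job's test scripts.

import json
import socket

SOCK_PATH = "/var/tmp/spdk.sock"  # assumed default SPDK RPC socket for this sketch

def spdk_rpc(method, params, req_id=1):
    """Send one JSON-RPC 2.0 request to a local SPDK target and return the decoded reply."""
    request = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(SOCK_PATH)
        sock.sendall(json.dumps(request).encode())
        decoder, buf = json.JSONDecoder(), b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                raise ConnectionError("SPDK RPC socket closed before a full reply arrived")
            buf += chunk
            try:
                reply, _ = decoder.raw_decode(buf.decode())
                return reply
            except ValueError:
                continue  # reply not complete yet; keep reading

# Both calls request NSID 1 on the same (hypothetical) subsystem. The first add can
# succeed; the second reuses the NSID and should be rejected along the error path
# logged throughout this section.
for i, bdev in enumerate(["Malloc0", "Malloc1"], start=1):  # hypothetical bdev names
    reply = spdk_rpc(
        "nvmf_subsystem_add_ns",
        {"nqn": "nqn.2016-06.io.spdk:cnode1",               # hypothetical subsystem NQN
         "namespace": {"bdev_name": bdev, "nsid": 1}},       # same NSID on purpose
        req_id=i,
    )
    print(reply.get("result", reply.get("error")))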
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:39.985 [2024-07-25 08:56:46.957680] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace [... same "Requested NSID 1 already in use" / "Unable to add namespace" error pair repeated roughly every 12 ms from 08:56:46.969627 through 08:56:47.998126 ...] 00:14:41.021 [2024-07-25 08:56:48.010049]
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.021 [2024-07-25 08:56:48.010105] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.021 [2024-07-25 08:56:48.022075] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.021 [2024-07-25 08:56:48.022132] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.021 [2024-07-25 08:56:48.034075] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.021 [2024-07-25 08:56:48.034133] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.021 [2024-07-25 08:56:48.046097] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.021 [2024-07-25 08:56:48.046154] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.021 [2024-07-25 08:56:48.058095] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.021 [2024-07-25 08:56:48.058152] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.021 [2024-07-25 08:56:48.070126] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.021 [2024-07-25 08:56:48.070211] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.021 [2024-07-25 08:56:48.082100] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.021 [2024-07-25 08:56:48.082142] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.021 [2024-07-25 08:56:48.094140] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.021 [2024-07-25 08:56:48.094215] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.021 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (69948) - No such process 00:14:41.021 08:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 69948 00:14:41.021 08:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:41.021 08:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.021 08:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:41.021 08:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.021 08:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:41.021 08:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.021 08:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:41.021 delay0 00:14:41.021 08:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.021 08:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:14:41.021 08:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.021 08:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:41.021 08:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.021 08:56:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:14:41.279 [2024-07-25 08:56:48.345645] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:47.831 Initializing NVMe Controllers 00:14:47.831 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:47.831 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:47.831 Initialization complete. Launching workers. 00:14:47.831 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 73 00:14:47.831 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 360, failed to submit 33 00:14:47.831 success 231, unsuccess 129, failed 0 00:14:47.831 08:56:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:14:47.831 08:56:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:14:47.831 08:56:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:47.831 08:56:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:14:47.831 08:56:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:47.831 08:56:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:14:47.831 08:56:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:47.831 08:56:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:47.831 rmmod nvme_tcp 00:14:47.831 rmmod nvme_fabrics 00:14:47.831 rmmod nvme_keyring 00:14:47.831 08:56:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:47.831 08:56:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:14:47.831 08:56:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:14:47.831 08:56:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 69781 ']' 00:14:47.831 08:56:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 69781 00:14:47.831 08:56:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 69781 ']' 00:14:47.831 08:56:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 69781 00:14:47.831 08:56:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:14:47.831 08:56:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:47.831 08:56:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69781 00:14:47.831 08:56:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:47.831 killing process with pid 69781 00:14:47.831 08:56:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:47.831 08:56:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69781' 00:14:47.831 08:56:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 
69781 00:14:47.831 08:56:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 69781 00:14:48.764 08:56:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:48.764 08:56:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:48.764 08:56:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:48.764 08:56:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:48.764 08:56:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:48.764 08:56:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.764 08:56:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:48.764 08:56:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:49.023 08:56:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:49.023 00:14:49.023 real 0m28.891s 00:14:49.023 user 0m48.073s 00:14:49.023 sys 0m6.999s 00:14:49.023 08:56:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:49.023 08:56:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:49.023 ************************************ 00:14:49.024 END TEST nvmf_zcopy 00:14:49.024 ************************************ 00:14:49.024 08:56:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:49.024 08:56:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:49.024 08:56:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:49.024 08:56:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:49.024 ************************************ 00:14:49.024 START TEST nvmf_nmic 00:14:49.024 ************************************ 00:14:49.024 08:56:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:49.024 * Looking for test storage... 
00:14:49.024 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:49.024 08:56:56 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:49.024 Cannot find device "nvmf_tgt_br" 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:49.024 Cannot find device "nvmf_tgt_br2" 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set 
nvmf_tgt_br down 00:14:49.024 Cannot find device "nvmf_tgt_br" 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:14:49.024 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:49.283 Cannot find device "nvmf_tgt_br2" 00:14:49.283 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:14:49.283 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:49.283 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:49.283 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:49.283 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:49.283 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:14:49.283 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:49.283 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:49.283 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:14:49.283 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:49.283 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:49.283 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:49.283 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:49.283 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:49.283 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:49.283 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:49.283 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:49.283 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:49.283 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:49.283 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:49.283 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:49.283 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:49.283 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:49.283 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:49.283 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:49.283 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:49.283 08:56:56 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:49.283 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:49.283 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:49.283 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:49.541 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:49.541 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:49.541 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:49.541 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:49.541 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:14:49.541 00:14:49.542 --- 10.0.0.2 ping statistics --- 00:14:49.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.542 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:14:49.542 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:49.542 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:49.542 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:14:49.542 00:14:49.542 --- 10.0.0.3 ping statistics --- 00:14:49.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.542 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:14:49.542 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:49.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:49.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:14:49.542 00:14:49.542 --- 10.0.0.1 ping statistics --- 00:14:49.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.542 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:14:49.542 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:49.542 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:14:49.542 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:49.542 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:49.542 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:49.542 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:49.542 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:49.542 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:49.542 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:49.542 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:14:49.542 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:49.542 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:49.542 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:49.542 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=70301 00:14:49.542 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 70301 00:14:49.542 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 70301 ']' 00:14:49.542 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:49.542 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.542 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:49.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.542 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.542 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:49.542 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:49.542 [2024-07-25 08:56:56.559605] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:14:49.542 [2024-07-25 08:56:56.559797] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.800 [2024-07-25 08:56:56.738726] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:50.058 [2024-07-25 08:56:56.989052] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:50.058 [2024-07-25 08:56:56.989165] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:50.058 [2024-07-25 08:56:56.989184] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:50.058 [2024-07-25 08:56:56.989201] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:50.058 [2024-07-25 08:56:56.989216] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:50.058 [2024-07-25 08:56:56.990081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:50.058 [2024-07-25 08:56:56.990176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:50.058 [2024-07-25 08:56:56.990251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.058 [2024-07-25 08:56:56.990269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:50.317 [2024-07-25 08:56:57.194974] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:50.575 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:50.575 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:14:50.575 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:50.575 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:50.575 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:50.575 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:50.575 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:50.575 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.575 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:50.575 [2024-07-25 08:56:57.561044] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:50.575 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.575 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:50.575 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.575 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:50.575 Malloc0 00:14:50.575 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.575 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:50.575 08:56:57 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.575 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:50.575 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.575 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:50.575 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.575 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:50.575 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.575 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:50.575 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.575 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:50.575 [2024-07-25 08:56:57.665832] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:50.575 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.575 test case1: single bdev can't be used in multiple subsystems 00:14:50.575 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:14:50.575 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:14:50.575 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.575 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:50.575 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.575 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:50.575 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.575 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:50.576 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.576 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:14:50.576 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:14:50.576 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.576 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:50.833 [2024-07-25 08:56:57.689541] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:14:50.833 [2024-07-25 08:56:57.689610] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:14:50.833 [2024-07-25 08:56:57.689630] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.833 request: 00:14:50.833 { 00:14:50.833 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:50.833 "namespace": { 00:14:50.833 "bdev_name": "Malloc0", 00:14:50.833 "no_auto_visible": false 00:14:50.833 }, 00:14:50.833 "method": "nvmf_subsystem_add_ns", 00:14:50.833 "req_id": 1 00:14:50.833 } 00:14:50.833 Got JSON-RPC error response 00:14:50.833 response: 00:14:50.833 { 00:14:50.833 "code": -32602, 00:14:50.833 "message": "Invalid parameters" 00:14:50.833 } 00:14:50.833 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:50.833 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:14:50.833 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:14:50.833 Adding namespace failed - expected result. 00:14:50.833 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:14:50.833 test case2: host connect to nvmf target in multiple paths 00:14:50.833 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:14:50.833 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:50.833 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.833 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:50.833 [2024-07-25 08:56:57.701762] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:50.833 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.833 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid=a4705431-95c9-4bc1-9185-4a8233d2d7f5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:50.834 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid=a4705431-95c9-4bc1-9185-4a8233d2d7f5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:14:51.091 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:14:51.091 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:14:51.091 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:51.091 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:51.091 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:14:52.991 08:56:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:52.991 08:56:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:52.991 08:56:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:52.991 08:56:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:52.991 08:56:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:52.991 08:56:59 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:14:52.991 08:56:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:52.991 [global] 00:14:52.991 thread=1 00:14:52.991 invalidate=1 00:14:52.991 rw=write 00:14:52.991 time_based=1 00:14:52.991 runtime=1 00:14:52.991 ioengine=libaio 00:14:52.991 direct=1 00:14:52.991 bs=4096 00:14:52.991 iodepth=1 00:14:52.991 norandommap=0 00:14:52.991 numjobs=1 00:14:52.991 00:14:52.991 verify_dump=1 00:14:52.991 verify_backlog=512 00:14:52.991 verify_state_save=0 00:14:52.991 do_verify=1 00:14:52.991 verify=crc32c-intel 00:14:52.991 [job0] 00:14:52.991 filename=/dev/nvme0n1 00:14:52.991 Could not set queue depth (nvme0n1) 00:14:53.249 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:53.249 fio-3.35 00:14:53.249 Starting 1 thread 00:14:54.183 00:14:54.183 job0: (groupid=0, jobs=1): err= 0: pid=70394: Thu Jul 25 08:57:01 2024 00:14:54.183 read: IOPS=2515, BW=9.83MiB/s (10.3MB/s)(9.84MiB/1001msec) 00:14:54.183 slat (nsec): min=12285, max=39376, avg=14707.74, stdev=2067.93 00:14:54.183 clat (usec): min=183, max=412, avg=219.88, stdev=18.20 00:14:54.183 lat (usec): min=199, max=425, avg=234.59, stdev=18.21 00:14:54.183 clat percentiles (usec): 00:14:54.183 | 1.00th=[ 194], 5.00th=[ 202], 10.00th=[ 204], 20.00th=[ 208], 00:14:54.183 | 30.00th=[ 212], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 221], 00:14:54.183 | 70.00th=[ 225], 80.00th=[ 229], 90.00th=[ 235], 95.00th=[ 243], 00:14:54.183 | 99.00th=[ 289], 99.50th=[ 351], 99.90th=[ 383], 99.95th=[ 404], 00:14:54.183 | 99.99th=[ 412] 00:14:54.183 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:14:54.183 slat (usec): min=17, max=122, avg=21.46, stdev= 4.16 00:14:54.183 clat (usec): min=113, max=320, avg=134.98, stdev=10.22 00:14:54.183 lat (usec): min=136, max=442, avg=156.45, stdev=11.63 00:14:54.183 clat percentiles (usec): 00:14:54.183 | 1.00th=[ 118], 5.00th=[ 121], 10.00th=[ 124], 20.00th=[ 128], 00:14:54.183 | 30.00th=[ 130], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 137], 00:14:54.183 | 70.00th=[ 139], 80.00th=[ 143], 90.00th=[ 147], 95.00th=[ 153], 00:14:54.183 | 99.00th=[ 163], 99.50th=[ 167], 99.90th=[ 186], 99.95th=[ 190], 00:14:54.183 | 99.99th=[ 322] 00:14:54.183 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:14:54.183 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:14:54.183 lat (usec) : 250=98.56%, 500=1.44% 00:14:54.183 cpu : usr=1.60%, sys=7.40%, ctx=5078, majf=0, minf=2 00:14:54.184 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:54.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:54.184 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:54.184 issued rwts: total=2518,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:54.184 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:54.184 00:14:54.184 Run status group 0 (all jobs): 00:14:54.184 READ: bw=9.83MiB/s (10.3MB/s), 9.83MiB/s-9.83MiB/s (10.3MB/s-10.3MB/s), io=9.84MiB (10.3MB), run=1001-1001msec 00:14:54.184 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:14:54.184 00:14:54.184 Disk stats (read/write): 00:14:54.184 nvme0n1: ios=2143/2560, merge=0/0, ticks=484/366, in_queue=850, util=91.68% 
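For reference, the fio-wrapper run above corresponds roughly to the following standalone fio invocation (a sketch reconstructed from the generated job file printed above; the wrapper builds that job file and points it at the /dev/nvme0n1 namespace exposed by the two nvme connect calls):
  # options taken from the [global]/[job0] sections shown above
  fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 --thread \
      --rw=write --bs=4096 --iodepth=1 --numjobs=1 --time_based --runtime=1 \
      --invalidate=1 --do_verify=1 --verify=crc32c-intel --verify_dump=1 \
      --verify_backlog=512 --verify_state_save=0
At queue depth 1 over the 1-second run this yields the ~2.5k write completions (plus the matching verify reads) reported in the summary above.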
00:14:54.184 08:57:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:54.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:14:54.442 08:57:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:54.442 08:57:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:14:54.442 08:57:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:54.442 08:57:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:54.442 08:57:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:54.442 08:57:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:54.442 08:57:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:14:54.442 08:57:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:14:54.442 08:57:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:14:54.442 08:57:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:54.442 08:57:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:14:54.442 08:57:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:54.442 08:57:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:14:54.442 08:57:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:54.442 08:57:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:54.442 rmmod nvme_tcp 00:14:54.442 rmmod nvme_fabrics 00:14:54.442 rmmod nvme_keyring 00:14:54.442 08:57:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:54.442 08:57:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:14:54.442 08:57:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:14:54.442 08:57:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 70301 ']' 00:14:54.442 08:57:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 70301 00:14:54.442 08:57:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 70301 ']' 00:14:54.442 08:57:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 70301 00:14:54.442 08:57:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:14:54.442 08:57:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:54.442 08:57:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70301 00:14:54.442 08:57:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:54.442 killing process with pid 70301 00:14:54.442 08:57:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:54.442 08:57:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70301' 00:14:54.442 08:57:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 70301 00:14:54.442 
08:57:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 70301 00:14:55.815 08:57:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:55.815 08:57:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:55.815 08:57:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:55.815 08:57:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:55.815 08:57:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:55.815 08:57:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.815 08:57:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:55.816 08:57:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.816 08:57:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:55.816 00:14:55.816 real 0m6.919s 00:14:55.816 user 0m20.859s 00:14:55.816 sys 0m2.296s 00:14:55.816 08:57:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:55.816 08:57:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:55.816 ************************************ 00:14:55.816 END TEST nvmf_nmic 00:14:55.816 ************************************ 00:14:55.816 08:57:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:55.816 08:57:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:55.816 08:57:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:55.816 08:57:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:55.816 ************************************ 00:14:55.816 START TEST nvmf_fio_target 00:14:55.816 ************************************ 00:14:55.816 08:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:56.074 * Looking for test storage... 
00:14:56.074 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:56.074 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:56.074 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:14:56.074 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:56.074 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:56.074 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:56.074 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:56.074 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:56.074 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:56.074 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:56.074 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:56.074 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:56.074 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:56.074 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:14:56.074 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:14:56.074 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:14:56.075 
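Sourcing nvmf/common.sh above fixes the test constants (ports 4420/4421/4422, serial SPDKISFASTANDAWESOME, subsystem NQN) and generates a per-run host identity with nvme gen-hostnqn; that --hostnqn/--hostid pair is what the later nvme connect calls in this log reuse. A small sketch of the pattern, with the host-ID derivation shown as an illustrative shell expansion rather than the script's exact code:

    # Generate a host NQN and derive the host ID (the uuid suffix), then
    # connect with both, as done later in this log for cnode1 on 10.0.0.2:4420.
    NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # illustrative derivation of the uuid part
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420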
08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:56.075 Cannot find device "nvmf_tgt_br" 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:56.075 Cannot find device "nvmf_tgt_br2" 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:56.075 Cannot find device "nvmf_tgt_br" 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:56.075 Cannot find device "nvmf_tgt_br2" 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:56.075 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:56.075 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:56.075 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:56.333 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:56.333 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:56.333 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:56.333 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:56.333 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:56.333 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:56.333 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:56.333 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:56.333 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:56.333 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:56.333 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:56.333 
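At this point nvmf_veth_init has created the namespace and the three veth pairs and assigned the test addresses: nvmf_init_if stays in the root namespace as the initiator side (10.0.0.1/24), while nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) are moved into nvmf_tgt_ns_spdk for the target; the bridge wiring, the iptables ACCEPT rules and the ping checks follow immediately below. A condensed recap of the interface and addressing step just traced (the commands are the ones from the trace, grouped for readability):

    # Namespace plus three veth pairs; target-side ends live in the namespace.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up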
08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:56.333 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:56.333 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:56.333 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:56.333 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:56.333 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:56.333 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:56.333 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:56.333 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:56.333 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:56.333 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:56.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:56.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:14:56.333 00:14:56.333 --- 10.0.0.2 ping statistics --- 00:14:56.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.333 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:14:56.333 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:56.333 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:56.333 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:14:56.333 00:14:56.333 --- 10.0.0.3 ping statistics --- 00:14:56.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.333 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:14:56.333 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:56.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:56.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:14:56.333 00:14:56.333 --- 10.0.0.1 ping statistics --- 00:14:56.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.333 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:14:56.333 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:56.333 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:14:56.333 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:56.333 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:56.333 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:56.333 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:56.333 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:56.333 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:56.333 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:56.333 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:14:56.333 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:56.333 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:56.333 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.333 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=70584 00:14:56.333 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 70584 00:14:56.333 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 70584 ']' 00:14:56.333 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:56.333 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.333 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:56.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.333 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.333 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:56.333 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.591 [2024-07-25 08:57:03.552442] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
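With all three addresses answering ping, nvmfappstart launches nvmf_tgt inside the namespace (the NVMF_APP array is prefixed with the ip netns exec wrapper at common.sh@209) and waitforlisten blocks until the application answers on /var/tmp/spdk.sock before any configuration RPCs are sent; the first of those, nvmf_create_transport -t tcp -o -u 8192, appears a few lines further down. A hedged sketch of that launch-and-wait step; the rpc_get_methods polling loop is an illustrative readiness probe, not a copy of the harness's waitforlisten helper:

    # Start the target in the test namespace and wait for its RPC socket,
    # then create the TCP transport exactly as the trace below does.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

The same rpc.py is then used for the bdev_malloc_create and bdev_raid_create calls and for nvmf_create_subsystem, nvmf_subsystem_add_ns and nvmf_subsystem_add_listener, as traced below, before the initiator-side nvme connect.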
00:14:56.591 [2024-07-25 08:57:03.552631] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.849 [2024-07-25 08:57:03.732935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:57.107 [2024-07-25 08:57:04.027681] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.107 [2024-07-25 08:57:04.027785] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:57.107 [2024-07-25 08:57:04.027846] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:57.107 [2024-07-25 08:57:04.027868] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:57.107 [2024-07-25 08:57:04.027887] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:57.108 [2024-07-25 08:57:04.028406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.108 [2024-07-25 08:57:04.028613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:57.108 [2024-07-25 08:57:04.029342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:57.108 [2024-07-25 08:57:04.029378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.366 [2024-07-25 08:57:04.245595] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:57.366 08:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:57.366 08:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:14:57.366 08:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:57.366 08:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:57.366 08:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.366 08:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:57.366 08:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:57.624 [2024-07-25 08:57:04.669023] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:57.624 08:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:58.191 08:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:14:58.191 08:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:58.449 08:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:14:58.449 08:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:58.707 08:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:14:58.707 08:57:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:58.966 08:57:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:14:58.966 08:57:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:14:59.224 08:57:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:59.791 08:57:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:14:59.791 08:57:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:00.049 08:57:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:15:00.049 08:57:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:00.306 08:57:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:15:00.306 08:57:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:15:00.563 08:57:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:00.821 08:57:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:00.821 08:57:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:01.078 08:57:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:01.078 08:57:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:01.336 08:57:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:01.594 [2024-07-25 08:57:08.564100] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:01.594 08:57:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:15:01.852 08:57:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:15:02.110 08:57:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid=a4705431-95c9-4bc1-9185-4a8233d2d7f5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:02.110 08:57:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:15:02.110 08:57:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:15:02.110 08:57:09 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:02.110 08:57:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:15:02.110 08:57:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:15:02.110 08:57:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:15:04.638 08:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:04.639 08:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:04.639 08:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:04.639 08:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:15:04.639 08:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:04.639 08:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:15:04.639 08:57:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:04.639 [global] 00:15:04.639 thread=1 00:15:04.639 invalidate=1 00:15:04.639 rw=write 00:15:04.639 time_based=1 00:15:04.639 runtime=1 00:15:04.639 ioengine=libaio 00:15:04.639 direct=1 00:15:04.639 bs=4096 00:15:04.639 iodepth=1 00:15:04.639 norandommap=0 00:15:04.639 numjobs=1 00:15:04.639 00:15:04.639 verify_dump=1 00:15:04.639 verify_backlog=512 00:15:04.639 verify_state_save=0 00:15:04.639 do_verify=1 00:15:04.639 verify=crc32c-intel 00:15:04.639 [job0] 00:15:04.639 filename=/dev/nvme0n1 00:15:04.639 [job1] 00:15:04.639 filename=/dev/nvme0n2 00:15:04.639 [job2] 00:15:04.639 filename=/dev/nvme0n3 00:15:04.639 [job3] 00:15:04.639 filename=/dev/nvme0n4 00:15:04.639 Could not set queue depth (nvme0n1) 00:15:04.639 Could not set queue depth (nvme0n2) 00:15:04.639 Could not set queue depth (nvme0n3) 00:15:04.639 Could not set queue depth (nvme0n4) 00:15:04.639 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:04.639 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:04.639 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:04.639 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:04.639 fio-3.35 00:15:04.639 Starting 4 threads 00:15:05.571 00:15:05.571 job0: (groupid=0, jobs=1): err= 0: pid=70775: Thu Jul 25 08:57:12 2024 00:15:05.571 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:15:05.571 slat (usec): min=11, max=118, avg=14.57, stdev= 2.99 00:15:05.571 clat (usec): min=117, max=338, avg=196.28, stdev=14.52 00:15:05.571 lat (usec): min=186, max=355, avg=210.85, stdev=14.62 00:15:05.571 clat percentiles (usec): 00:15:05.571 | 1.00th=[ 178], 5.00th=[ 182], 10.00th=[ 184], 20.00th=[ 188], 00:15:05.571 | 30.00th=[ 190], 40.00th=[ 192], 50.00th=[ 194], 60.00th=[ 196], 00:15:05.571 | 70.00th=[ 200], 80.00th=[ 204], 90.00th=[ 210], 95.00th=[ 217], 00:15:05.571 | 99.00th=[ 247], 99.50th=[ 293], 99.90th=[ 330], 99.95th=[ 338], 00:15:05.571 | 99.99th=[ 338] 00:15:05.571 
write: IOPS=2795, BW=10.9MiB/s (11.4MB/s)(10.9MiB/1001msec); 0 zone resets 00:15:05.571 slat (nsec): min=15519, max=74411, avg=21189.68, stdev=4353.82 00:15:05.571 clat (usec): min=102, max=2245, avg=140.06, stdev=43.30 00:15:05.572 lat (usec): min=133, max=2263, avg=161.25, stdev=43.82 00:15:05.572 clat percentiles (usec): 00:15:05.572 | 1.00th=[ 121], 5.00th=[ 123], 10.00th=[ 125], 20.00th=[ 128], 00:15:05.572 | 30.00th=[ 131], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 141], 00:15:05.572 | 70.00th=[ 145], 80.00th=[ 149], 90.00th=[ 157], 95.00th=[ 163], 00:15:05.572 | 99.00th=[ 180], 99.50th=[ 190], 99.90th=[ 408], 99.95th=[ 515], 00:15:05.572 | 99.99th=[ 2245] 00:15:05.572 bw ( KiB/s): min=12288, max=12288, per=36.48%, avg=12288.00, stdev= 0.00, samples=1 00:15:05.572 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:15:05.572 lat (usec) : 250=99.44%, 500=0.52%, 750=0.02% 00:15:05.572 lat (msec) : 4=0.02% 00:15:05.572 cpu : usr=2.20%, sys=7.40%, ctx=5361, majf=0, minf=1 00:15:05.572 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:05.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.572 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.572 issued rwts: total=2560,2798,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:05.572 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:05.572 job1: (groupid=0, jobs=1): err= 0: pid=70776: Thu Jul 25 08:57:12 2024 00:15:05.572 read: IOPS=1266, BW=5067KiB/s (5189kB/s)(5072KiB/1001msec) 00:15:05.572 slat (nsec): min=13864, max=60517, avg=24644.84, stdev=6583.35 00:15:05.572 clat (usec): min=203, max=615, avg=356.84, stdev=74.18 00:15:05.572 lat (usec): min=220, max=650, avg=381.48, stdev=78.48 00:15:05.572 clat percentiles (usec): 00:15:05.572 | 1.00th=[ 210], 5.00th=[ 217], 10.00th=[ 223], 20.00th=[ 343], 00:15:05.572 | 30.00th=[ 363], 40.00th=[ 371], 50.00th=[ 375], 60.00th=[ 379], 00:15:05.572 | 70.00th=[ 388], 80.00th=[ 392], 90.00th=[ 404], 95.00th=[ 437], 00:15:05.572 | 99.00th=[ 553], 99.50th=[ 570], 99.90th=[ 586], 99.95th=[ 619], 00:15:05.572 | 99.99th=[ 619] 00:15:05.572 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:15:05.572 slat (usec): min=18, max=101, avg=38.31, stdev= 9.77 00:15:05.572 clat (usec): min=137, max=751, avg=292.50, stdev=88.69 00:15:05.572 lat (usec): min=158, max=793, avg=330.81, stdev=93.12 00:15:05.572 clat percentiles (usec): 00:15:05.572 | 1.00th=[ 143], 5.00th=[ 157], 10.00th=[ 169], 20.00th=[ 253], 00:15:05.572 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 289], 60.00th=[ 293], 00:15:05.572 | 70.00th=[ 306], 80.00th=[ 322], 90.00th=[ 437], 95.00th=[ 478], 00:15:05.572 | 99.00th=[ 529], 99.50th=[ 603], 99.90th=[ 742], 99.95th=[ 750], 00:15:05.572 | 99.99th=[ 750] 00:15:05.572 bw ( KiB/s): min= 6616, max= 6616, per=19.64%, avg=6616.00, stdev= 0.00, samples=1 00:15:05.572 iops : min= 1654, max= 1654, avg=1654.00, stdev= 0.00, samples=1 00:15:05.572 lat (usec) : 250=19.22%, 500=77.82%, 750=2.92%, 1000=0.04% 00:15:05.572 cpu : usr=2.00%, sys=6.90%, ctx=2824, majf=0, minf=13 00:15:05.572 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:05.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.572 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.572 issued rwts: total=1268,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:05.572 latency : target=0, window=0, percentile=100.00%, depth=1 
00:15:05.572 job2: (groupid=0, jobs=1): err= 0: pid=70777: Thu Jul 25 08:57:12 2024 00:15:05.572 read: IOPS=1246, BW=4987KiB/s (5107kB/s)(4992KiB/1001msec) 00:15:05.572 slat (nsec): min=12262, max=69246, avg=26292.80, stdev=7029.07 00:15:05.572 clat (usec): min=261, max=702, avg=371.69, stdev=53.97 00:15:05.572 lat (usec): min=286, max=745, avg=397.99, stdev=57.58 00:15:05.572 clat percentiles (usec): 00:15:05.572 | 1.00th=[ 277], 5.00th=[ 289], 10.00th=[ 306], 20.00th=[ 351], 00:15:05.572 | 30.00th=[ 363], 40.00th=[ 367], 50.00th=[ 371], 60.00th=[ 379], 00:15:05.572 | 70.00th=[ 383], 80.00th=[ 392], 90.00th=[ 408], 95.00th=[ 433], 00:15:05.572 | 99.00th=[ 652], 99.50th=[ 660], 99.90th=[ 701], 99.95th=[ 701], 00:15:05.572 | 99.99th=[ 701] 00:15:05.572 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:15:05.572 slat (nsec): min=13440, max=99823, avg=36200.41, stdev=8344.05 00:15:05.572 clat (usec): min=153, max=7555, avg=286.15, stdev=232.80 00:15:05.572 lat (usec): min=178, max=7582, avg=322.35, stdev=232.95 00:15:05.572 clat percentiles (usec): 00:15:05.572 | 1.00th=[ 161], 5.00th=[ 182], 10.00th=[ 200], 20.00th=[ 249], 00:15:05.572 | 30.00th=[ 262], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 285], 00:15:05.572 | 70.00th=[ 293], 80.00th=[ 310], 90.00th=[ 330], 95.00th=[ 351], 00:15:05.572 | 99.00th=[ 424], 99.50th=[ 586], 99.90th=[ 4047], 99.95th=[ 7570], 00:15:05.572 | 99.99th=[ 7570] 00:15:05.572 bw ( KiB/s): min= 7008, max= 7008, per=20.80%, avg=7008.00, stdev= 0.00, samples=1 00:15:05.572 iops : min= 1752, max= 1752, avg=1752.00, stdev= 0.00, samples=1 00:15:05.572 lat (usec) : 250=11.28%, 500=87.25%, 750=1.29% 00:15:05.572 lat (msec) : 2=0.07%, 4=0.04%, 10=0.07% 00:15:05.572 cpu : usr=1.90%, sys=7.10%, ctx=2785, majf=0, minf=7 00:15:05.572 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:05.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.572 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.572 issued rwts: total=1248,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:05.572 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:05.572 job3: (groupid=0, jobs=1): err= 0: pid=70778: Thu Jul 25 08:57:12 2024 00:15:05.572 read: IOPS=2065, BW=8264KiB/s (8462kB/s)(8272KiB/1001msec) 00:15:05.572 slat (nsec): min=12454, max=96400, avg=18150.15, stdev=5454.22 00:15:05.572 clat (usec): min=163, max=2070, avg=216.56, stdev=49.92 00:15:05.572 lat (usec): min=195, max=2086, avg=234.71, stdev=50.49 00:15:05.572 clat percentiles (usec): 00:15:05.572 | 1.00th=[ 186], 5.00th=[ 192], 10.00th=[ 194], 20.00th=[ 200], 00:15:05.572 | 30.00th=[ 206], 40.00th=[ 210], 50.00th=[ 212], 60.00th=[ 217], 00:15:05.572 | 70.00th=[ 221], 80.00th=[ 227], 90.00th=[ 235], 95.00th=[ 243], 00:15:05.572 | 99.00th=[ 322], 99.50th=[ 334], 99.90th=[ 627], 99.95th=[ 1012], 00:15:05.572 | 99.99th=[ 2073] 00:15:05.572 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:15:05.572 slat (usec): min=14, max=120, avg=25.02, stdev= 6.47 00:15:05.572 clat (usec): min=125, max=349, avg=172.34, stdev=39.65 00:15:05.572 lat (usec): min=148, max=429, avg=197.36, stdev=41.14 00:15:05.572 clat percentiles (usec): 00:15:05.572 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 149], 00:15:05.572 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 165], 00:15:05.572 | 70.00th=[ 169], 80.00th=[ 178], 90.00th=[ 245], 95.00th=[ 281], 00:15:05.572 | 99.00th=[ 306], 99.50th=[ 
306], 99.90th=[ 322], 99.95th=[ 326], 00:15:05.572 | 99.99th=[ 351] 00:15:05.572 bw ( KiB/s): min=11456, max=11456, per=34.01%, avg=11456.00, stdev= 0.00, samples=1 00:15:05.572 iops : min= 2864, max= 2864, avg=2864.00, stdev= 0.00, samples=1 00:15:05.572 lat (usec) : 250=93.19%, 500=6.72%, 750=0.04% 00:15:05.572 lat (msec) : 2=0.02%, 4=0.02% 00:15:05.572 cpu : usr=1.40%, sys=8.70%, ctx=4631, majf=0, minf=14 00:15:05.572 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:05.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.572 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.572 issued rwts: total=2068,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:05.572 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:05.572 00:15:05.572 Run status group 0 (all jobs): 00:15:05.572 READ: bw=27.9MiB/s (29.2MB/s), 4987KiB/s-9.99MiB/s (5107kB/s-10.5MB/s), io=27.9MiB (29.3MB), run=1001-1001msec 00:15:05.572 WRITE: bw=32.9MiB/s (34.5MB/s), 6138KiB/s-10.9MiB/s (6285kB/s-11.4MB/s), io=32.9MiB (34.5MB), run=1001-1001msec 00:15:05.572 00:15:05.572 Disk stats (read/write): 00:15:05.572 nvme0n1: ios=2128/2560, merge=0/0, ticks=450/384, in_queue=834, util=87.55% 00:15:05.572 nvme0n2: ios=1039/1246, merge=0/0, ticks=416/392, in_queue=808, util=87.69% 00:15:05.572 nvme0n3: ios=1024/1310, merge=0/0, ticks=406/394, in_queue=800, util=88.68% 00:15:05.572 nvme0n4: ios=2022/2048, merge=0/0, ticks=433/352, in_queue=785, util=89.76% 00:15:05.572 08:57:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:15:05.572 [global] 00:15:05.572 thread=1 00:15:05.572 invalidate=1 00:15:05.572 rw=randwrite 00:15:05.572 time_based=1 00:15:05.572 runtime=1 00:15:05.572 ioengine=libaio 00:15:05.572 direct=1 00:15:05.572 bs=4096 00:15:05.572 iodepth=1 00:15:05.572 norandommap=0 00:15:05.572 numjobs=1 00:15:05.572 00:15:05.572 verify_dump=1 00:15:05.572 verify_backlog=512 00:15:05.572 verify_state_save=0 00:15:05.572 do_verify=1 00:15:05.572 verify=crc32c-intel 00:15:05.572 [job0] 00:15:05.572 filename=/dev/nvme0n1 00:15:05.572 [job1] 00:15:05.572 filename=/dev/nvme0n2 00:15:05.572 [job2] 00:15:05.572 filename=/dev/nvme0n3 00:15:05.572 [job3] 00:15:05.572 filename=/dev/nvme0n4 00:15:05.830 Could not set queue depth (nvme0n1) 00:15:05.830 Could not set queue depth (nvme0n2) 00:15:05.830 Could not set queue depth (nvme0n3) 00:15:05.830 Could not set queue depth (nvme0n4) 00:15:05.830 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:05.830 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:05.830 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:05.830 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:05.830 fio-3.35 00:15:05.830 Starting 4 threads 00:15:07.297 00:15:07.297 job0: (groupid=0, jobs=1): err= 0: pid=70831: Thu Jul 25 08:57:13 2024 00:15:07.297 read: IOPS=2560, BW=10.0MiB/s (10.5MB/s)(10.0MiB/1000msec) 00:15:07.297 slat (nsec): min=11498, max=39922, avg=14174.77, stdev=2431.52 00:15:07.297 clat (usec): min=171, max=279, avg=194.85, stdev=11.39 00:15:07.297 lat (usec): min=184, max=291, avg=209.03, stdev=12.03 00:15:07.297 clat percentiles (usec): 00:15:07.297 | 
1.00th=[ 176], 5.00th=[ 180], 10.00th=[ 182], 20.00th=[ 186], 00:15:07.297 | 30.00th=[ 188], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 196], 00:15:07.297 | 70.00th=[ 200], 80.00th=[ 204], 90.00th=[ 210], 95.00th=[ 217], 00:15:07.297 | 99.00th=[ 231], 99.50th=[ 235], 99.90th=[ 243], 99.95th=[ 277], 00:15:07.297 | 99.99th=[ 281] 00:15:07.297 write: IOPS=2774, BW=10.8MiB/s (11.4MB/s)(10.8MiB/1000msec); 0 zone resets 00:15:07.297 slat (nsec): min=16867, max=98480, avg=21241.11, stdev=4273.04 00:15:07.297 clat (usec): min=117, max=2535, avg=143.07, stdev=49.76 00:15:07.297 lat (usec): min=136, max=2558, avg=164.31, stdev=50.39 00:15:07.297 clat percentiles (usec): 00:15:07.297 | 1.00th=[ 122], 5.00th=[ 126], 10.00th=[ 128], 20.00th=[ 131], 00:15:07.297 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 143], 00:15:07.297 | 70.00th=[ 147], 80.00th=[ 153], 90.00th=[ 161], 95.00th=[ 167], 00:15:07.297 | 99.00th=[ 182], 99.50th=[ 190], 99.90th=[ 498], 99.95th=[ 734], 00:15:07.297 | 99.99th=[ 2540] 00:15:07.297 bw ( KiB/s): min=12288, max=12288, per=36.58%, avg=12288.00, stdev= 0.00, samples=1 00:15:07.297 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:15:07.297 lat (usec) : 250=99.85%, 500=0.11%, 750=0.02% 00:15:07.297 lat (msec) : 4=0.02% 00:15:07.297 cpu : usr=2.10%, sys=7.60%, ctx=5336, majf=0, minf=11 00:15:07.297 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:07.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.297 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.297 issued rwts: total=2560,2774,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:07.297 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:07.297 job1: (groupid=0, jobs=1): err= 0: pid=70832: Thu Jul 25 08:57:13 2024 00:15:07.297 read: IOPS=1234, BW=4939KiB/s (5058kB/s)(4944KiB/1001msec) 00:15:07.297 slat (nsec): min=10987, max=57469, avg=19753.58, stdev=6274.54 00:15:07.297 clat (usec): min=337, max=1000, avg=383.06, stdev=32.76 00:15:07.297 lat (usec): min=355, max=1016, avg=402.82, stdev=33.69 00:15:07.297 clat percentiles (usec): 00:15:07.297 | 1.00th=[ 347], 5.00th=[ 355], 10.00th=[ 359], 20.00th=[ 367], 00:15:07.297 | 30.00th=[ 371], 40.00th=[ 375], 50.00th=[ 379], 60.00th=[ 383], 00:15:07.297 | 70.00th=[ 392], 80.00th=[ 396], 90.00th=[ 404], 95.00th=[ 412], 00:15:07.297 | 99.00th=[ 474], 99.50th=[ 586], 99.90th=[ 791], 99.95th=[ 1004], 00:15:07.297 | 99.99th=[ 1004] 00:15:07.297 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:15:07.297 slat (nsec): min=12662, max=70382, avg=25498.58, stdev=6537.08 00:15:07.297 clat (usec): min=246, max=651, avg=297.56, stdev=24.19 00:15:07.297 lat (usec): min=272, max=671, avg=323.06, stdev=23.31 00:15:07.297 clat percentiles (usec): 00:15:07.297 | 1.00th=[ 262], 5.00th=[ 269], 10.00th=[ 277], 20.00th=[ 281], 00:15:07.297 | 30.00th=[ 285], 40.00th=[ 289], 50.00th=[ 293], 60.00th=[ 302], 00:15:07.297 | 70.00th=[ 306], 80.00th=[ 314], 90.00th=[ 322], 95.00th=[ 330], 00:15:07.297 | 99.00th=[ 367], 99.50th=[ 379], 99.90th=[ 611], 99.95th=[ 652], 00:15:07.297 | 99.99th=[ 652] 00:15:07.297 bw ( KiB/s): min= 7208, max= 7208, per=21.46%, avg=7208.00, stdev= 0.00, samples=1 00:15:07.297 iops : min= 1802, max= 1802, avg=1802.00, stdev= 0.00, samples=1 00:15:07.297 lat (usec) : 250=0.14%, 500=99.31%, 750=0.47%, 1000=0.04% 00:15:07.297 lat (msec) : 2=0.04% 00:15:07.297 cpu : usr=1.70%, sys=5.20%, ctx=2779, majf=0, minf=8 00:15:07.297 IO 
depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:07.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.297 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.297 issued rwts: total=1236,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:07.297 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:07.297 job2: (groupid=0, jobs=1): err= 0: pid=70833: Thu Jul 25 08:57:13 2024 00:15:07.297 read: IOPS=1234, BW=4939KiB/s (5058kB/s)(4944KiB/1001msec) 00:15:07.297 slat (nsec): min=11597, max=70318, avg=20529.40, stdev=6323.46 00:15:07.297 clat (usec): min=333, max=990, avg=382.28, stdev=33.79 00:15:07.297 lat (usec): min=356, max=1016, avg=402.81, stdev=33.75 00:15:07.297 clat percentiles (usec): 00:15:07.297 | 1.00th=[ 343], 5.00th=[ 351], 10.00th=[ 359], 20.00th=[ 367], 00:15:07.297 | 30.00th=[ 371], 40.00th=[ 375], 50.00th=[ 379], 60.00th=[ 383], 00:15:07.297 | 70.00th=[ 388], 80.00th=[ 396], 90.00th=[ 408], 95.00th=[ 412], 00:15:07.297 | 99.00th=[ 478], 99.50th=[ 603], 99.90th=[ 775], 99.95th=[ 988], 00:15:07.297 | 99.99th=[ 988] 00:15:07.297 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:15:07.297 slat (usec): min=15, max=116, avg=29.19, stdev= 8.10 00:15:07.297 clat (usec): min=241, max=624, avg=293.70, stdev=21.73 00:15:07.297 lat (usec): min=272, max=654, avg=322.89, stdev=22.12 00:15:07.297 clat percentiles (usec): 00:15:07.297 | 1.00th=[ 258], 5.00th=[ 269], 10.00th=[ 273], 20.00th=[ 281], 00:15:07.297 | 30.00th=[ 285], 40.00th=[ 289], 50.00th=[ 293], 60.00th=[ 297], 00:15:07.297 | 70.00th=[ 302], 80.00th=[ 306], 90.00th=[ 318], 95.00th=[ 322], 00:15:07.297 | 99.00th=[ 355], 99.50th=[ 379], 99.90th=[ 586], 99.95th=[ 627], 00:15:07.297 | 99.99th=[ 627] 00:15:07.297 bw ( KiB/s): min= 7208, max= 7208, per=21.46%, avg=7208.00, stdev= 0.00, samples=1 00:15:07.297 iops : min= 1802, max= 1802, avg=1802.00, stdev= 0.00, samples=1 00:15:07.297 lat (usec) : 250=0.18%, 500=99.28%, 750=0.47%, 1000=0.07% 00:15:07.297 cpu : usr=1.80%, sys=5.70%, ctx=2775, majf=0, minf=11 00:15:07.297 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:07.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.297 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.297 issued rwts: total=1236,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:07.297 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:07.297 job3: (groupid=0, jobs=1): err= 0: pid=70834: Thu Jul 25 08:57:13 2024 00:15:07.297 read: IOPS=2341, BW=9367KiB/s (9591kB/s)(9376KiB/1001msec) 00:15:07.297 slat (nsec): min=11273, max=33454, avg=13971.31, stdev=2423.51 00:15:07.297 clat (usec): min=177, max=1849, avg=209.56, stdev=39.01 00:15:07.297 lat (usec): min=189, max=1862, avg=223.53, stdev=39.32 00:15:07.297 clat percentiles (usec): 00:15:07.297 | 1.00th=[ 182], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 194], 00:15:07.297 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 212], 00:15:07.297 | 70.00th=[ 217], 80.00th=[ 223], 90.00th=[ 231], 95.00th=[ 239], 00:15:07.297 | 99.00th=[ 265], 99.50th=[ 289], 99.90th=[ 457], 99.95th=[ 490], 00:15:07.297 | 99.99th=[ 1844] 00:15:07.297 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:15:07.297 slat (nsec): min=13864, max=87445, avg=22863.29, stdev=6550.68 00:15:07.297 clat (usec): min=127, max=678, avg=159.74, stdev=22.65 00:15:07.297 lat (usec): min=145, 
max=714, avg=182.60, stdev=24.98 00:15:07.297 clat percentiles (usec): 00:15:07.297 | 1.00th=[ 133], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 147], 00:15:07.297 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 161], 00:15:07.297 | 70.00th=[ 165], 80.00th=[ 172], 90.00th=[ 180], 95.00th=[ 186], 00:15:07.297 | 99.00th=[ 208], 99.50th=[ 237], 99.90th=[ 502], 99.95th=[ 529], 00:15:07.297 | 99.99th=[ 676] 00:15:07.297 bw ( KiB/s): min=10888, max=10888, per=32.41%, avg=10888.00, stdev= 0.00, samples=1 00:15:07.297 iops : min= 2722, max= 2722, avg=2722.00, stdev= 0.00, samples=1 00:15:07.297 lat (usec) : 250=98.96%, 500=0.96%, 750=0.06% 00:15:07.297 lat (msec) : 2=0.02% 00:15:07.297 cpu : usr=1.70%, sys=7.60%, ctx=4906, majf=0, minf=15 00:15:07.297 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:07.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.297 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.297 issued rwts: total=2344,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:07.297 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:07.297 00:15:07.298 Run status group 0 (all jobs): 00:15:07.298 READ: bw=28.8MiB/s (30.2MB/s), 4939KiB/s-10.0MiB/s (5058kB/s-10.5MB/s), io=28.8MiB (30.2MB), run=1000-1001msec 00:15:07.298 WRITE: bw=32.8MiB/s (34.4MB/s), 6138KiB/s-10.8MiB/s (6285kB/s-11.4MB/s), io=32.8MiB (34.4MB), run=1000-1001msec 00:15:07.298 00:15:07.298 Disk stats (read/write): 00:15:07.298 nvme0n1: ios=2141/2560, merge=0/0, ticks=427/392, in_queue=819, util=88.48% 00:15:07.298 nvme0n2: ios=1071/1393, merge=0/0, ticks=388/390, in_queue=778, util=88.69% 00:15:07.298 nvme0n3: ios=1027/1393, merge=0/0, ticks=377/411, in_queue=788, util=89.21% 00:15:07.298 nvme0n4: ios=2048/2174, merge=0/0, ticks=444/376, in_queue=820, util=89.87% 00:15:07.298 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:15:07.298 [global] 00:15:07.298 thread=1 00:15:07.298 invalidate=1 00:15:07.298 rw=write 00:15:07.298 time_based=1 00:15:07.298 runtime=1 00:15:07.298 ioengine=libaio 00:15:07.298 direct=1 00:15:07.298 bs=4096 00:15:07.298 iodepth=128 00:15:07.298 norandommap=0 00:15:07.298 numjobs=1 00:15:07.298 00:15:07.298 verify_dump=1 00:15:07.298 verify_backlog=512 00:15:07.298 verify_state_save=0 00:15:07.298 do_verify=1 00:15:07.298 verify=crc32c-intel 00:15:07.298 [job0] 00:15:07.298 filename=/dev/nvme0n1 00:15:07.298 [job1] 00:15:07.298 filename=/dev/nvme0n2 00:15:07.298 [job2] 00:15:07.298 filename=/dev/nvme0n3 00:15:07.298 [job3] 00:15:07.298 filename=/dev/nvme0n4 00:15:07.298 Could not set queue depth (nvme0n1) 00:15:07.298 Could not set queue depth (nvme0n2) 00:15:07.298 Could not set queue depth (nvme0n3) 00:15:07.298 Could not set queue depth (nvme0n4) 00:15:07.298 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:07.298 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:07.298 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:07.298 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:07.298 fio-3.35 00:15:07.298 Starting 4 threads 00:15:08.232 00:15:08.232 job0: (groupid=0, jobs=1): err= 0: pid=70894: Thu Jul 25 08:57:15 2024 00:15:08.232 
read: IOPS=1777, BW=7109KiB/s (7280kB/s)(7152KiB/1006msec) 00:15:08.232 slat (usec): min=7, max=10988, avg=238.25, stdev=972.13 00:15:08.232 clat (usec): min=5270, max=57659, avg=29810.45, stdev=12143.02 00:15:08.232 lat (usec): min=8567, max=62313, avg=30048.70, stdev=12225.06 00:15:08.232 clat percentiles (usec): 00:15:08.232 | 1.00th=[ 8848], 5.00th=[17695], 10.00th=[18482], 20.00th=[20055], 00:15:08.232 | 30.00th=[22152], 40.00th=[23462], 50.00th=[23987], 60.00th=[25822], 00:15:08.232 | 70.00th=[34866], 80.00th=[43254], 90.00th=[49021], 95.00th=[53216], 00:15:08.232 | 99.00th=[56886], 99.50th=[56886], 99.90th=[57410], 99.95th=[57410], 00:15:08.232 | 99.99th=[57410] 00:15:08.232 write: IOPS=2035, BW=8143KiB/s (8339kB/s)(8192KiB/1006msec); 0 zone resets 00:15:08.232 slat (usec): min=9, max=11664, avg=272.59, stdev=1078.86 00:15:08.232 clat (usec): min=16808, max=54399, avg=35230.49, stdev=9526.41 00:15:08.232 lat (usec): min=16822, max=54419, avg=35503.08, stdev=9567.65 00:15:08.232 clat percentiles (usec): 00:15:08.232 | 1.00th=[16909], 5.00th=[17433], 10.00th=[23987], 20.00th=[26084], 00:15:08.232 | 30.00th=[30802], 40.00th=[33424], 50.00th=[34866], 60.00th=[37487], 00:15:08.232 | 70.00th=[40633], 80.00th=[44303], 90.00th=[48497], 95.00th=[52691], 00:15:08.232 | 99.00th=[53216], 99.50th=[54264], 99.90th=[54264], 99.95th=[54264], 00:15:08.232 | 99.99th=[54264] 00:15:08.232 bw ( KiB/s): min= 7896, max= 8488, per=18.87%, avg=8192.00, stdev=418.61, samples=2 00:15:08.232 iops : min= 1974, max= 2122, avg=2048.00, stdev=104.65, samples=2 00:15:08.232 lat (msec) : 10=0.55%, 20=12.43%, 50=79.25%, 100=7.77% 00:15:08.232 cpu : usr=2.09%, sys=5.67%, ctx=334, majf=0, minf=15 00:15:08.232 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:15:08.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:08.232 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:08.232 issued rwts: total=1788,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:08.232 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:08.232 job1: (groupid=0, jobs=1): err= 0: pid=70895: Thu Jul 25 08:57:15 2024 00:15:08.232 read: IOPS=1603, BW=6416KiB/s (6570kB/s)(6480KiB/1010msec) 00:15:08.232 slat (usec): min=6, max=16792, avg=295.70, stdev=1368.72 00:15:08.232 clat (usec): min=8912, max=62191, avg=38242.52, stdev=11581.76 00:15:08.232 lat (usec): min=11404, max=62217, avg=38538.22, stdev=11573.33 00:15:08.232 clat percentiles (usec): 00:15:08.232 | 1.00th=[17695], 5.00th=[23462], 10.00th=[24249], 20.00th=[27657], 00:15:08.232 | 30.00th=[28705], 40.00th=[32113], 50.00th=[35914], 60.00th=[41157], 00:15:08.232 | 70.00th=[44827], 80.00th=[51119], 90.00th=[55313], 95.00th=[57410], 00:15:08.232 | 99.00th=[62129], 99.50th=[62129], 99.90th=[62129], 99.95th=[62129], 00:15:08.232 | 99.99th=[62129] 00:15:08.232 write: IOPS=2027, BW=8111KiB/s (8306kB/s)(8192KiB/1010msec); 0 zone resets 00:15:08.232 slat (usec): min=13, max=16179, avg=246.33, stdev=980.77 00:15:08.232 clat (usec): min=12349, max=69188, avg=31670.96, stdev=10281.70 00:15:08.232 lat (usec): min=16515, max=69209, avg=31917.29, stdev=10319.86 00:15:08.232 clat percentiles (usec): 00:15:08.232 | 1.00th=[16581], 5.00th=[18482], 10.00th=[18482], 20.00th=[19006], 00:15:08.232 | 30.00th=[25560], 40.00th=[30802], 50.00th=[33817], 60.00th=[34866], 00:15:08.232 | 70.00th=[35390], 80.00th=[38011], 90.00th=[43779], 95.00th=[50070], 00:15:08.232 | 99.00th=[62653], 99.50th=[64226], 99.90th=[68682], 
99.95th=[68682], 00:15:08.232 | 99.99th=[68682] 00:15:08.232 bw ( KiB/s): min= 7848, max= 8192, per=18.48%, avg=8020.00, stdev=243.24, samples=2 00:15:08.232 iops : min= 1962, max= 2048, avg=2005.00, stdev=60.81, samples=2 00:15:08.232 lat (msec) : 10=0.03%, 20=14.78%, 50=72.44%, 100=12.76% 00:15:08.232 cpu : usr=1.68%, sys=6.14%, ctx=503, majf=0, minf=4 00:15:08.232 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:15:08.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:08.232 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:08.232 issued rwts: total=1620,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:08.232 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:08.232 job2: (groupid=0, jobs=1): err= 0: pid=70896: Thu Jul 25 08:57:15 2024 00:15:08.232 read: IOPS=5046, BW=19.7MiB/s (20.7MB/s)(19.8MiB/1002msec) 00:15:08.232 slat (usec): min=5, max=3033, avg=94.37, stdev=443.49 00:15:08.232 clat (usec): min=394, max=14216, avg=12540.46, stdev=1137.75 00:15:08.232 lat (usec): min=2837, max=14265, avg=12634.84, stdev=1048.81 00:15:08.232 clat percentiles (usec): 00:15:08.232 | 1.00th=[ 6194], 5.00th=[11863], 10.00th=[12125], 20.00th=[12387], 00:15:08.232 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12649], 60.00th=[12780], 00:15:08.232 | 70.00th=[12911], 80.00th=[13042], 90.00th=[13304], 95.00th=[13566], 00:15:08.232 | 99.00th=[13960], 99.50th=[13960], 99.90th=[14091], 99.95th=[14222], 00:15:08.232 | 99.99th=[14222] 00:15:08.232 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:15:08.232 slat (usec): min=10, max=2997, avg=94.42, stdev=406.00 00:15:08.232 clat (usec): min=9329, max=13514, avg=12334.62, stdev=567.27 00:15:08.232 lat (usec): min=10306, max=13535, avg=12429.04, stdev=401.45 00:15:08.232 clat percentiles (usec): 00:15:08.232 | 1.00th=[ 9896], 5.00th=[11731], 10.00th=[11863], 20.00th=[11994], 00:15:08.232 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12387], 60.00th=[12387], 00:15:08.232 | 70.00th=[12518], 80.00th=[12649], 90.00th=[13042], 95.00th=[13173], 00:15:08.232 | 99.00th=[13435], 99.50th=[13435], 99.90th=[13435], 99.95th=[13566], 00:15:08.232 | 99.99th=[13566] 00:15:08.232 bw ( KiB/s): min=20480, max=20521, per=47.23%, avg=20500.50, stdev=28.99, samples=2 00:15:08.232 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:15:08.232 lat (usec) : 500=0.01% 00:15:08.232 lat (msec) : 4=0.31%, 10=1.78%, 20=97.90% 00:15:08.232 cpu : usr=4.20%, sys=14.19%, ctx=319, majf=0, minf=7 00:15:08.232 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:15:08.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:08.232 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:08.232 issued rwts: total=5057,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:08.232 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:08.232 job3: (groupid=0, jobs=1): err= 0: pid=70897: Thu Jul 25 08:57:15 2024 00:15:08.233 read: IOPS=1522, BW=6089KiB/s (6235kB/s)(6144KiB/1009msec) 00:15:08.233 slat (usec): min=7, max=13671, avg=351.62, stdev=1291.90 00:15:08.233 clat (usec): min=28288, max=59305, avg=44709.78, stdev=7785.24 00:15:08.233 lat (usec): min=28427, max=63202, avg=45061.40, stdev=7804.17 00:15:08.233 clat percentiles (usec): 00:15:08.233 | 1.00th=[28705], 5.00th=[30278], 10.00th=[34866], 20.00th=[38011], 00:15:08.233 | 30.00th=[40633], 40.00th=[42206], 50.00th=[44303], 60.00th=[46924], 
00:15:08.233 | 70.00th=[50594], 80.00th=[52691], 90.00th=[55313], 95.00th=[55837], 00:15:08.233 | 99.00th=[57934], 99.50th=[57934], 99.90th=[59507], 99.95th=[59507], 00:15:08.233 | 99.99th=[59507] 00:15:08.233 write: IOPS=1728, BW=6914KiB/s (7080kB/s)(6976KiB/1009msec); 0 zone resets 00:15:08.233 slat (usec): min=13, max=12777, avg=257.01, stdev=1052.16 00:15:08.233 clat (usec): min=5595, max=59921, avg=33408.75, stdev=9411.74 00:15:08.233 lat (usec): min=8930, max=59945, avg=33665.75, stdev=9432.52 00:15:08.233 clat percentiles (usec): 00:15:08.233 | 1.00th=[15008], 5.00th=[22676], 10.00th=[24773], 20.00th=[26346], 00:15:08.233 | 30.00th=[26608], 40.00th=[30016], 50.00th=[31851], 60.00th=[33162], 00:15:08.233 | 70.00th=[35390], 80.00th=[39060], 90.00th=[49546], 95.00th=[53740], 00:15:08.233 | 99.00th=[58459], 99.50th=[58459], 99.90th=[60031], 99.95th=[60031], 00:15:08.233 | 99.99th=[60031] 00:15:08.233 bw ( KiB/s): min= 4736, max= 8208, per=14.91%, avg=6472.00, stdev=2455.07, samples=2 00:15:08.233 iops : min= 1184, max= 2052, avg=1618.00, stdev=613.77, samples=2 00:15:08.233 lat (msec) : 10=0.27%, 20=0.95%, 50=79.21%, 100=19.57% 00:15:08.233 cpu : usr=1.98%, sys=5.26%, ctx=500, majf=0, minf=7 00:15:08.233 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:15:08.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:08.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:08.233 issued rwts: total=1536,1744,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:08.233 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:08.233 00:15:08.233 Run status group 0 (all jobs): 00:15:08.233 READ: bw=38.7MiB/s (40.6MB/s), 6089KiB/s-19.7MiB/s (6235kB/s-20.7MB/s), io=39.1MiB (41.0MB), run=1002-1010msec 00:15:08.233 WRITE: bw=42.4MiB/s (44.4MB/s), 6914KiB/s-20.0MiB/s (7080kB/s-20.9MB/s), io=42.8MiB (44.9MB), run=1002-1010msec 00:15:08.233 00:15:08.233 Disk stats (read/write): 00:15:08.233 nvme0n1: ios=1586/1639, merge=0/0, ticks=15738/18836, in_queue=34574, util=88.57% 00:15:08.233 nvme0n2: ios=1585/1727, merge=0/0, ticks=14297/11893, in_queue=26190, util=89.19% 00:15:08.233 nvme0n3: ios=4219/4608, merge=0/0, ticks=11838/12598, in_queue=24436, util=89.83% 00:15:08.233 nvme0n4: ios=1340/1536, merge=0/0, ticks=17663/13297, in_queue=30960, util=89.26% 00:15:08.491 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:15:08.491 [global] 00:15:08.491 thread=1 00:15:08.491 invalidate=1 00:15:08.491 rw=randwrite 00:15:08.491 time_based=1 00:15:08.491 runtime=1 00:15:08.491 ioengine=libaio 00:15:08.491 direct=1 00:15:08.491 bs=4096 00:15:08.491 iodepth=128 00:15:08.491 norandommap=0 00:15:08.491 numjobs=1 00:15:08.491 00:15:08.491 verify_dump=1 00:15:08.491 verify_backlog=512 00:15:08.491 verify_state_save=0 00:15:08.491 do_verify=1 00:15:08.491 verify=crc32c-intel 00:15:08.491 [job0] 00:15:08.491 filename=/dev/nvme0n1 00:15:08.491 [job1] 00:15:08.491 filename=/dev/nvme0n2 00:15:08.491 [job2] 00:15:08.491 filename=/dev/nvme0n3 00:15:08.491 [job3] 00:15:08.491 filename=/dev/nvme0n4 00:15:08.491 Could not set queue depth (nvme0n1) 00:15:08.491 Could not set queue depth (nvme0n2) 00:15:08.491 Could not set queue depth (nvme0n3) 00:15:08.491 Could not set queue depth (nvme0n4) 00:15:08.491 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:08.491 job1: 
(g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:08.491 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:08.491 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:08.491 fio-3.35 00:15:08.491 Starting 4 threads 00:15:09.865 00:15:09.865 job0: (groupid=0, jobs=1): err= 0: pid=70951: Thu Jul 25 08:57:16 2024 00:15:09.865 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:15:09.865 slat (usec): min=6, max=9516, avg=97.84, stdev=627.88 00:15:09.865 clat (usec): min=7689, max=28485, avg=13618.45, stdev=2091.42 00:15:09.865 lat (usec): min=7702, max=28502, avg=13716.29, stdev=2117.80 00:15:09.865 clat percentiles (usec): 00:15:09.865 | 1.00th=[ 8225], 5.00th=[11600], 10.00th=[12256], 20.00th=[12518], 00:15:09.865 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13435], 60.00th=[13698], 00:15:09.865 | 70.00th=[13960], 80.00th=[14222], 90.00th=[15139], 95.00th=[18482], 00:15:09.865 | 99.00th=[20841], 99.50th=[21365], 99.90th=[23200], 99.95th=[23200], 00:15:09.865 | 99.99th=[28443] 00:15:09.865 write: IOPS=5108, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:15:09.865 slat (usec): min=10, max=9555, avg=100.02, stdev=592.83 00:15:09.865 clat (usec): min=614, max=18700, avg=12532.18, stdev=2094.13 00:15:09.865 lat (usec): min=5035, max=18735, avg=12632.20, stdev=2033.71 00:15:09.865 clat percentiles (usec): 00:15:09.865 | 1.00th=[ 6128], 5.00th=[10552], 10.00th=[10945], 20.00th=[11469], 00:15:09.865 | 30.00th=[11731], 40.00th=[11863], 50.00th=[12125], 60.00th=[12518], 00:15:09.865 | 70.00th=[12780], 80.00th=[13173], 90.00th=[14484], 95.00th=[17695], 00:15:09.865 | 99.00th=[18220], 99.50th=[18220], 99.90th=[18482], 99.95th=[18482], 00:15:09.865 | 99.99th=[18744] 00:15:09.865 bw ( KiB/s): min=19968, max=20000, per=34.14%, avg=19984.00, stdev=22.63, samples=2 00:15:09.865 iops : min= 4992, max= 5000, avg=4996.00, stdev= 5.66, samples=2 00:15:09.865 lat (usec) : 750=0.01% 00:15:09.865 lat (msec) : 10=3.41%, 20=94.86%, 50=1.72% 00:15:09.865 cpu : usr=5.00%, sys=12.69%, ctx=258, majf=0, minf=9 00:15:09.865 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:15:09.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:09.865 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:09.865 issued rwts: total=4608,5119,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:09.865 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:09.865 job1: (groupid=0, jobs=1): err= 0: pid=70952: Thu Jul 25 08:57:16 2024 00:15:09.865 read: IOPS=2029, BW=8119KiB/s (8314kB/s)(8192KiB/1009msec) 00:15:09.865 slat (usec): min=7, max=9702, avg=228.94, stdev=909.02 00:15:09.865 clat (usec): min=18034, max=39522, avg=28573.85, stdev=3835.50 00:15:09.865 lat (usec): min=18048, max=39538, avg=28802.79, stdev=3858.36 00:15:09.865 clat percentiles (usec): 00:15:09.865 | 1.00th=[20055], 5.00th=[22414], 10.00th=[24773], 20.00th=[26346], 00:15:09.865 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27919], 60.00th=[28181], 00:15:09.865 | 70.00th=[28967], 80.00th=[31065], 90.00th=[34341], 95.00th=[36963], 00:15:09.865 | 99.00th=[38536], 99.50th=[39060], 99.90th=[39584], 99.95th=[39584], 00:15:09.865 | 99.99th=[39584] 00:15:09.865 write: IOPS=2487, BW=9950KiB/s (10.2MB/s)(9.80MiB/1009msec); 0 zone resets 00:15:09.865 slat (usec): min=5, max=11157, avg=204.31, 
stdev=792.96 00:15:09.865 clat (usec): min=7686, max=40110, avg=27196.65, stdev=4399.48 00:15:09.865 lat (usec): min=9107, max=45678, avg=27400.97, stdev=4462.52 00:15:09.865 clat percentiles (usec): 00:15:09.865 | 1.00th=[15008], 5.00th=[17957], 10.00th=[20841], 20.00th=[25297], 00:15:09.865 | 30.00th=[26608], 40.00th=[26870], 50.00th=[27919], 60.00th=[28443], 00:15:09.865 | 70.00th=[28705], 80.00th=[28967], 90.00th=[32113], 95.00th=[34341], 00:15:09.865 | 99.00th=[38536], 99.50th=[39060], 99.90th=[40109], 99.95th=[40109], 00:15:09.865 | 99.99th=[40109] 00:15:09.865 bw ( KiB/s): min= 8934, max=10112, per=16.27%, avg=9523.00, stdev=832.97, samples=2 00:15:09.865 iops : min= 2233, max= 2528, avg=2380.50, stdev=208.60, samples=2 00:15:09.865 lat (msec) : 10=0.20%, 20=4.67%, 50=95.13% 00:15:09.865 cpu : usr=1.98%, sys=6.75%, ctx=811, majf=0, minf=15 00:15:09.865 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:15:09.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:09.865 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:09.865 issued rwts: total=2048,2510,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:09.865 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:09.865 job2: (groupid=0, jobs=1): err= 0: pid=70953: Thu Jul 25 08:57:16 2024 00:15:09.865 read: IOPS=4396, BW=17.2MiB/s (18.0MB/s)(17.3MiB/1006msec) 00:15:09.865 slat (usec): min=9, max=7104, avg=105.23, stdev=669.16 00:15:09.865 clat (usec): min=1434, max=24426, avg=14586.57, stdev=1887.31 00:15:09.866 lat (usec): min=6390, max=29153, avg=14691.80, stdev=1907.00 00:15:09.866 clat percentiles (usec): 00:15:09.866 | 1.00th=[ 7111], 5.00th=[10290], 10.00th=[13960], 20.00th=[14222], 00:15:09.866 | 30.00th=[14353], 40.00th=[14484], 50.00th=[14615], 60.00th=[14877], 00:15:09.866 | 70.00th=[15008], 80.00th=[15270], 90.00th=[15664], 95.00th=[16188], 00:15:09.866 | 99.00th=[22676], 99.50th=[23200], 99.90th=[24511], 99.95th=[24511], 00:15:09.866 | 99.99th=[24511] 00:15:09.866 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:15:09.866 slat (usec): min=8, max=11535, avg=108.99, stdev=669.35 00:15:09.866 clat (usec): min=7130, max=20427, avg=13668.09, stdev=1355.55 00:15:09.866 lat (usec): min=9706, max=20450, avg=13777.09, stdev=1220.63 00:15:09.866 clat percentiles (usec): 00:15:09.866 | 1.00th=[ 8717], 5.00th=[12256], 10.00th=[12518], 20.00th=[12911], 00:15:09.866 | 30.00th=[13304], 40.00th=[13566], 50.00th=[13698], 60.00th=[13829], 00:15:09.866 | 70.00th=[14091], 80.00th=[14353], 90.00th=[14484], 95.00th=[15008], 00:15:09.866 | 99.00th=[20055], 99.50th=[20055], 99.90th=[20317], 99.95th=[20317], 00:15:09.866 | 99.99th=[20317] 00:15:09.866 bw ( KiB/s): min=17928, max=18973, per=31.51%, avg=18450.50, stdev=738.93, samples=2 00:15:09.866 iops : min= 4482, max= 4743, avg=4612.50, stdev=184.55, samples=2 00:15:09.866 lat (msec) : 2=0.01%, 10=3.19%, 20=95.56%, 50=1.24% 00:15:09.866 cpu : usr=4.68%, sys=11.64%, ctx=202, majf=0, minf=9 00:15:09.866 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:15:09.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:09.866 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:09.866 issued rwts: total=4423,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:09.866 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:09.866 job3: (groupid=0, jobs=1): err= 0: pid=70954: Thu Jul 25 08:57:16 2024 00:15:09.866 
read: IOPS=2192, BW=8772KiB/s (8982kB/s)(8868KiB/1011msec) 00:15:09.866 slat (usec): min=5, max=15107, avg=227.30, stdev=849.78 00:15:09.866 clat (usec): min=9028, max=41892, avg=28353.80, stdev=4771.36 00:15:09.866 lat (usec): min=12010, max=41942, avg=28581.10, stdev=4776.21 00:15:09.866 clat percentiles (usec): 00:15:09.866 | 1.00th=[15401], 5.00th=[21365], 10.00th=[23200], 20.00th=[25035], 00:15:09.866 | 30.00th=[26870], 40.00th=[27395], 50.00th=[27919], 60.00th=[28181], 00:15:09.866 | 70.00th=[28967], 80.00th=[31589], 90.00th=[35914], 95.00th=[38011], 00:15:09.866 | 99.00th=[40109], 99.50th=[40633], 99.90th=[41681], 99.95th=[41681], 00:15:09.866 | 99.99th=[41681] 00:15:09.866 write: IOPS=2532, BW=9.89MiB/s (10.4MB/s)(10.0MiB/1011msec); 0 zone resets 00:15:09.866 slat (usec): min=5, max=11752, avg=186.57, stdev=767.75 00:15:09.866 clat (usec): min=9710, max=40711, avg=25193.56, stdev=4498.00 00:15:09.866 lat (usec): min=9735, max=40719, avg=25380.13, stdev=4464.93 00:15:09.866 clat percentiles (usec): 00:15:09.866 | 1.00th=[12125], 5.00th=[16450], 10.00th=[18744], 20.00th=[21365], 00:15:09.866 | 30.00th=[23200], 40.00th=[25297], 50.00th=[26346], 60.00th=[27395], 00:15:09.866 | 70.00th=[28443], 80.00th=[28705], 90.00th=[29492], 95.00th=[30540], 00:15:09.866 | 99.00th=[32637], 99.50th=[33424], 99.90th=[34341], 99.95th=[34341], 00:15:09.866 | 99.99th=[40633] 00:15:09.866 bw ( KiB/s): min=10016, max=10464, per=17.49%, avg=10240.00, stdev=316.78, samples=2 00:15:09.866 iops : min= 2504, max= 2616, avg=2560.00, stdev=79.20, samples=2 00:15:09.866 lat (msec) : 10=0.08%, 20=8.75%, 50=91.17% 00:15:09.866 cpu : usr=1.98%, sys=7.52%, ctx=779, majf=0, minf=15 00:15:09.866 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:15:09.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:09.866 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:09.866 issued rwts: total=2217,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:09.866 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:09.866 00:15:09.866 Run status group 0 (all jobs): 00:15:09.866 READ: bw=51.4MiB/s (53.9MB/s), 8119KiB/s-18.0MiB/s (8314kB/s-18.8MB/s), io=51.9MiB (54.5MB), run=1002-1011msec 00:15:09.866 WRITE: bw=57.2MiB/s (59.9MB/s), 9950KiB/s-20.0MiB/s (10.2MB/s-20.9MB/s), io=57.8MiB (60.6MB), run=1002-1011msec 00:15:09.866 00:15:09.866 Disk stats (read/write): 00:15:09.866 nvme0n1: ios=4097/4096, merge=0/0, ticks=52769/48309, in_queue=101078, util=88.86% 00:15:09.866 nvme0n2: ios=1812/2048, merge=0/0, ticks=25882/25613, in_queue=51495, util=87.08% 00:15:09.866 nvme0n3: ios=3590/4032, merge=0/0, ticks=49945/51450, in_queue=101395, util=89.13% 00:15:09.866 nvme0n4: ios=1963/2048, merge=0/0, ticks=27852/23994, in_queue=51846, util=89.79% 00:15:09.866 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:15:09.866 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=70967 00:15:09.866 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:15:09.866 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:15:09.866 [global] 00:15:09.866 thread=1 00:15:09.866 invalidate=1 00:15:09.866 rw=read 00:15:09.866 time_based=1 00:15:09.866 runtime=10 00:15:09.866 ioengine=libaio 00:15:09.866 direct=1 00:15:09.866 bs=4096 00:15:09.866 iodepth=1 00:15:09.866 
norandommap=1 00:15:09.866 numjobs=1 00:15:09.866 00:15:09.866 [job0] 00:15:09.866 filename=/dev/nvme0n1 00:15:09.866 [job1] 00:15:09.866 filename=/dev/nvme0n2 00:15:09.866 [job2] 00:15:09.866 filename=/dev/nvme0n3 00:15:09.866 [job3] 00:15:09.866 filename=/dev/nvme0n4 00:15:09.866 Could not set queue depth (nvme0n1) 00:15:09.866 Could not set queue depth (nvme0n2) 00:15:09.866 Could not set queue depth (nvme0n3) 00:15:09.866 Could not set queue depth (nvme0n4) 00:15:09.866 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:09.866 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:09.866 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:09.866 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:09.866 fio-3.35 00:15:09.866 Starting 4 threads 00:15:13.152 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:15:13.152 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=33054720, buflen=4096 00:15:13.152 fio: pid=71014, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:13.152 08:57:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:15:13.414 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=36995072, buflen=4096 00:15:13.414 fio: pid=71013, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:13.414 08:57:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:13.414 08:57:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:15:13.672 fio: pid=71007, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:13.672 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=66920448, buflen=4096 00:15:13.672 08:57:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:13.672 08:57:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:15:13.930 fio: pid=71008, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:13.930 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=7516160, buflen=4096 00:15:13.930 00:15:13.930 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=71007: Thu Jul 25 08:57:21 2024 00:15:13.930 read: IOPS=4678, BW=18.3MiB/s (19.2MB/s)(63.8MiB/3492msec) 00:15:13.930 slat (usec): min=10, max=12458, avg=17.29, stdev=170.60 00:15:13.930 clat (usec): min=167, max=2673, avg=195.08, stdev=46.67 00:15:13.930 lat (usec): min=182, max=12686, avg=212.37, stdev=177.52 00:15:13.930 clat percentiles (usec): 00:15:13.930 | 1.00th=[ 176], 5.00th=[ 180], 10.00th=[ 182], 20.00th=[ 184], 00:15:13.930 | 30.00th=[ 188], 40.00th=[ 190], 50.00th=[ 192], 60.00th=[ 194], 00:15:13.930 | 70.00th=[ 198], 80.00th=[ 202], 90.00th=[ 208], 95.00th=[ 215], 00:15:13.930 | 99.00th=[ 233], 99.50th=[ 306], 99.90th=[ 758], 99.95th=[ 1205], 00:15:13.930 | 99.99th=[ 2311] 
00:15:13.930 bw ( KiB/s): min=17784, max=19352, per=35.61%, avg=18894.00, stdev=564.67, samples=6 00:15:13.930 iops : min= 4446, max= 4838, avg=4723.33, stdev=141.12, samples=6 00:15:13.930 lat (usec) : 250=99.35%, 500=0.45%, 750=0.09%, 1000=0.03% 00:15:13.930 lat (msec) : 2=0.06%, 4=0.02% 00:15:13.930 cpu : usr=1.09%, sys=6.13%, ctx=16347, majf=0, minf=1 00:15:13.930 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:13.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:13.930 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:13.930 issued rwts: total=16339,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:13.930 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:13.930 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=71008: Thu Jul 25 08:57:21 2024 00:15:13.930 read: IOPS=4678, BW=18.3MiB/s (19.2MB/s)(71.2MiB/3894msec) 00:15:13.930 slat (usec): min=10, max=11867, avg=15.58, stdev=158.70 00:15:13.930 clat (usec): min=163, max=2812, avg=196.96, stdev=32.63 00:15:13.930 lat (usec): min=181, max=12149, avg=212.54, stdev=162.81 00:15:13.930 clat percentiles (usec): 00:15:13.930 | 1.00th=[ 178], 5.00th=[ 182], 10.00th=[ 184], 20.00th=[ 188], 00:15:13.930 | 30.00th=[ 190], 40.00th=[ 192], 50.00th=[ 194], 60.00th=[ 198], 00:15:13.930 | 70.00th=[ 200], 80.00th=[ 204], 90.00th=[ 210], 95.00th=[ 217], 00:15:13.930 | 99.00th=[ 235], 99.50th=[ 249], 99.90th=[ 469], 99.95th=[ 553], 00:15:13.930 | 99.99th=[ 2008] 00:15:13.930 bw ( KiB/s): min=17849, max=19120, per=35.24%, avg=18701.43, stdev=451.64, samples=7 00:15:13.930 iops : min= 4462, max= 4780, avg=4675.14, stdev=112.91, samples=7 00:15:13.930 lat (usec) : 250=99.52%, 500=0.39%, 750=0.06% 00:15:13.930 lat (msec) : 2=0.02%, 4=0.01% 00:15:13.930 cpu : usr=1.18%, sys=5.19%, ctx=18227, majf=0, minf=1 00:15:13.930 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:13.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:13.930 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:13.930 issued rwts: total=18220,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:13.930 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:13.930 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=71013: Thu Jul 25 08:57:21 2024 00:15:13.930 read: IOPS=2815, BW=11.0MiB/s (11.5MB/s)(35.3MiB/3208msec) 00:15:13.930 slat (usec): min=8, max=10391, avg=18.79, stdev=142.54 00:15:13.930 clat (usec): min=174, max=4106, avg=334.58, stdev=74.01 00:15:13.930 lat (usec): min=189, max=10659, avg=353.37, stdev=160.15 00:15:13.930 clat percentiles (usec): 00:15:13.930 | 1.00th=[ 188], 5.00th=[ 210], 10.00th=[ 255], 20.00th=[ 318], 00:15:13.930 | 30.00th=[ 326], 40.00th=[ 334], 50.00th=[ 338], 60.00th=[ 343], 00:15:13.930 | 70.00th=[ 351], 80.00th=[ 355], 90.00th=[ 367], 95.00th=[ 388], 00:15:13.930 | 99.00th=[ 510], 99.50th=[ 537], 99.90th=[ 725], 99.95th=[ 988], 00:15:13.930 | 99.99th=[ 4113] 00:15:13.930 bw ( KiB/s): min=10091, max=11368, per=20.73%, avg=10999.50, stdev=472.01, samples=6 00:15:13.930 iops : min= 2522, max= 2842, avg=2749.67, stdev=118.25, samples=6 00:15:13.930 lat (usec) : 250=9.87%, 500=88.79%, 750=1.24%, 1000=0.04% 00:15:13.930 lat (msec) : 2=0.02%, 4=0.01%, 10=0.01% 00:15:13.930 cpu : usr=0.75%, sys=4.74%, ctx=9036, majf=0, minf=1 00:15:13.930 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:15:13.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:13.930 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:13.930 issued rwts: total=9033,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:13.930 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:13.930 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=71014: Thu Jul 25 08:57:21 2024 00:15:13.930 read: IOPS=2728, BW=10.7MiB/s (11.2MB/s)(31.5MiB/2958msec) 00:15:13.930 slat (nsec): min=8757, max=70067, avg=18559.65, stdev=5679.25 00:15:13.930 clat (usec): min=278, max=2424, avg=345.85, stdev=46.49 00:15:13.930 lat (usec): min=293, max=2440, avg=364.41, stdev=46.79 00:15:13.930 clat percentiles (usec): 00:15:13.930 | 1.00th=[ 297], 5.00th=[ 310], 10.00th=[ 318], 20.00th=[ 326], 00:15:13.930 | 30.00th=[ 330], 40.00th=[ 334], 50.00th=[ 338], 60.00th=[ 343], 00:15:13.930 | 70.00th=[ 351], 80.00th=[ 359], 90.00th=[ 371], 95.00th=[ 396], 00:15:13.930 | 99.00th=[ 515], 99.50th=[ 537], 99.90th=[ 709], 99.95th=[ 758], 00:15:13.930 | 99.99th=[ 2409] 00:15:13.930 bw ( KiB/s): min=10099, max=11360, per=20.60%, avg=10932.00, stdev=489.13, samples=5 00:15:13.930 iops : min= 2524, max= 2840, avg=2732.80, stdev=122.59, samples=5 00:15:13.930 lat (usec) : 500=98.69%, 750=1.24%, 1000=0.04% 00:15:13.930 lat (msec) : 2=0.01%, 4=0.01% 00:15:13.930 cpu : usr=1.29%, sys=4.73%, ctx=8071, majf=0, minf=1 00:15:13.930 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:13.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:13.930 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:13.930 issued rwts: total=8071,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:13.930 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:13.930 00:15:13.930 Run status group 0 (all jobs): 00:15:13.931 READ: bw=51.8MiB/s (54.3MB/s), 10.7MiB/s-18.3MiB/s (11.2MB/s-19.2MB/s), io=202MiB (212MB), run=2958-3894msec 00:15:13.931 00:15:13.931 Disk stats (read/write): 00:15:13.931 nvme0n1: ios=15688/0, merge=0/0, ticks=3124/0, in_queue=3124, util=94.74% 00:15:13.931 nvme0n2: ios=18029/0, merge=0/0, ticks=3595/0, in_queue=3595, util=95.43% 00:15:13.931 nvme0n3: ios=8567/0, merge=0/0, ticks=2738/0, in_queue=2738, util=96.22% 00:15:13.931 nvme0n4: ios=7781/0, merge=0/0, ticks=2587/0, in_queue=2587, util=96.81% 00:15:14.188 08:57:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:14.188 08:57:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:15:14.754 08:57:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:14.754 08:57:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:15:15.012 08:57:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:15.012 08:57:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:15:15.270 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in 
$malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:15.270 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:15:15.838 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:15.838 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:15:16.406 08:57:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:15:16.406 08:57:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 70967 00:15:16.406 08:57:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:15:16.406 08:57:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:16.406 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.406 08:57:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:16.406 08:57:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:15:16.406 08:57:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:16.406 08:57:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:16.406 08:57:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:16.406 08:57:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:16.406 nvmf hotplug test: fio failed as expected 00:15:16.406 08:57:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:15:16.406 08:57:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:15:16.406 08:57:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:15:16.406 08:57:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:16.664 08:57:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:15:16.664 08:57:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:15:16.664 08:57:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:15:16.664 08:57:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:15:16.664 08:57:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:15:16.664 08:57:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:16.664 08:57:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:15:16.664 08:57:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:16.664 08:57:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:15:16.664 08:57:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:16.664 08:57:23 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:16.664 rmmod nvme_tcp 00:15:16.664 rmmod nvme_fabrics 00:15:16.664 rmmod nvme_keyring 00:15:16.664 08:57:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:16.664 08:57:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:15:16.664 08:57:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:15:16.664 08:57:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 70584 ']' 00:15:16.664 08:57:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 70584 00:15:16.664 08:57:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 70584 ']' 00:15:16.664 08:57:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 70584 00:15:16.664 08:57:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:15:16.664 08:57:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:16.664 08:57:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70584 00:15:16.664 killing process with pid 70584 00:15:16.664 08:57:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:16.664 08:57:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:16.664 08:57:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70584' 00:15:16.664 08:57:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 70584 00:15:16.664 08:57:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 70584 00:15:18.040 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:18.040 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:18.040 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:18.040 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:18.040 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:18.040 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:18.040 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:18.040 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:18.040 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:18.040 00:15:18.040 real 0m22.020s 00:15:18.040 user 1m20.532s 00:15:18.040 sys 0m10.679s 00:15:18.040 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:18.040 ************************************ 00:15:18.040 END TEST nvmf_fio_target 00:15:18.040 ************************************ 00:15:18.040 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.040 08:57:24 
nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:18.040 08:57:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:18.040 08:57:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:18.040 08:57:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:15:18.040 ************************************ 00:15:18.040 START TEST nvmf_bdevio 00:15:18.040 ************************************ 00:15:18.040 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:18.040 * Looking for test storage... 00:15:18.040 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:18.040 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:18.040 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:15:18.040 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:18.040 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:18.040 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:18.040 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:18.040 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:18.040 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:18.040 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:18.040 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:18.040 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:18.040 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:18.040 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:15:18.040 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:15:18.040 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:18.040 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:18.040 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:18.040 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:18.040 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:18.040 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:18.040 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:18.040 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:18.040 08:57:25 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.040 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.040 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.040 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:15:18.040 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.040 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:15:18.040 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:18.040 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:18.040 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:18.040 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:18.040 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:18.040 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:18.040 
08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:18.040 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:18.040 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:18.041 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:18.041 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:15:18.041 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:18.041 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:18.041 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:18.041 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:18.041 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:18.041 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:18.041 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:18.041 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:18.041 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:18.041 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:18.041 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:18.041 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:18.041 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:18.041 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:18.041 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:18.041 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:18.041 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:18.041 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:18.041 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:18.041 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:18.041 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:18.041 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:18.041 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:18.041 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:18.041 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:18.041 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:18.041 08:57:25 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:18.041 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:18.041 Cannot find device "nvmf_tgt_br" 00:15:18.041 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:15:18.041 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:18.041 Cannot find device "nvmf_tgt_br2" 00:15:18.041 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:15:18.041 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:18.041 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:18.300 Cannot find device "nvmf_tgt_br" 00:15:18.300 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:15:18.300 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:18.300 Cannot find device "nvmf_tgt_br2" 00:15:18.300 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:15:18.300 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:18.300 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:18.300 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:18.300 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:18.300 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:15:18.300 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:18.300 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:18.300 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:15:18.300 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:18.300 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:18.300 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:18.300 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:18.300 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:18.300 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:18.300 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:18.300 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:18.300 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:18.300 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:18.300 08:57:25 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:18.300 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:18.300 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:18.300 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:18.300 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:18.300 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:18.300 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:18.300 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:18.300 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:18.300 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:18.300 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:18.300 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:18.300 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:18.300 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:18.300 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:18.300 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:15:18.300 00:15:18.300 --- 10.0.0.2 ping statistics --- 00:15:18.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:18.300 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:15:18.300 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:18.300 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:18.300 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:15:18.300 00:15:18.300 --- 10.0.0.3 ping statistics --- 00:15:18.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:18.300 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:15:18.300 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:18.300 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:18.300 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:15:18.300 00:15:18.300 --- 10.0.0.1 ping statistics --- 00:15:18.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:18.300 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:15:18.300 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:18.300 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:15:18.300 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:18.300 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:18.300 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:18.300 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:18.300 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:18.300 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:18.300 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:18.559 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:18.559 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:18.559 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:18.559 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:18.559 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=71304 00:15:18.559 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 71304 00:15:18.559 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:15:18.559 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 71304 ']' 00:15:18.559 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.559 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:18.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:18.559 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.559 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:18.559 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:18.559 [2024-07-25 08:57:25.545540] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:15:18.559 [2024-07-25 08:57:25.545728] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:18.817 [2024-07-25 08:57:25.725597] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:19.076 [2024-07-25 08:57:25.997560] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:19.076 [2024-07-25 08:57:25.997636] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:19.076 [2024-07-25 08:57:25.997655] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:19.076 [2024-07-25 08:57:25.997671] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:19.076 [2024-07-25 08:57:25.997687] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:19.076 [2024-07-25 08:57:25.998292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:19.076 [2024-07-25 08:57:25.998433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:15:19.076 [2024-07-25 08:57:25.998521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:19.076 [2024-07-25 08:57:25.998539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:15:19.334 [2024-07-25 08:57:26.206534] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:19.592 08:57:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:19.592 08:57:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:15:19.592 08:57:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:19.592 08:57:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:19.592 08:57:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:19.592 08:57:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:19.592 08:57:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:19.592 08:57:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.592 08:57:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:19.592 [2024-07-25 08:57:26.504554] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:19.592 08:57:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.592 08:57:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:19.592 08:57:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.592 08:57:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:19.592 Malloc0 00:15:19.592 08:57:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.592 08:57:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:15:19.592 08:57:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.592 08:57:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:19.592 08:57:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.592 08:57:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:19.592 08:57:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.592 08:57:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:19.592 08:57:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.592 08:57:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:19.592 08:57:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.592 08:57:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:19.592 [2024-07-25 08:57:26.613402] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:19.592 08:57:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.592 08:57:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:15:19.592 08:57:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:19.592 08:57:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:15:19.592 08:57:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:15:19.592 08:57:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:19.592 08:57:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:19.592 { 00:15:19.592 "params": { 00:15:19.592 "name": "Nvme$subsystem", 00:15:19.592 "trtype": "$TEST_TRANSPORT", 00:15:19.592 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:19.592 "adrfam": "ipv4", 00:15:19.592 "trsvcid": "$NVMF_PORT", 00:15:19.592 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:19.592 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:19.592 "hdgst": ${hdgst:-false}, 00:15:19.592 "ddgst": ${ddgst:-false} 00:15:19.592 }, 00:15:19.592 "method": "bdev_nvme_attach_controller" 00:15:19.592 } 00:15:19.592 EOF 00:15:19.592 )") 00:15:19.592 08:57:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:15:19.592 08:57:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
00:15:19.592 08:57:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:15:19.592 08:57:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:19.592 "params": { 00:15:19.592 "name": "Nvme1", 00:15:19.592 "trtype": "tcp", 00:15:19.592 "traddr": "10.0.0.2", 00:15:19.592 "adrfam": "ipv4", 00:15:19.592 "trsvcid": "4420", 00:15:19.592 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:19.592 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:19.592 "hdgst": false, 00:15:19.592 "ddgst": false 00:15:19.592 }, 00:15:19.592 "method": "bdev_nvme_attach_controller" 00:15:19.592 }' 00:15:19.850 [2024-07-25 08:57:26.709642] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:19.850 [2024-07-25 08:57:26.709791] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71340 ] 00:15:19.850 [2024-07-25 08:57:26.877862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:20.108 [2024-07-25 08:57:27.150948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:20.108 [2024-07-25 08:57:27.151092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.108 [2024-07-25 08:57:27.151108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:20.366 [2024-07-25 08:57:27.359995] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:20.624 I/O targets: 00:15:20.624 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:20.624 00:15:20.624 00:15:20.624 CUnit - A unit testing framework for C - Version 2.1-3 00:15:20.624 http://cunit.sourceforge.net/ 00:15:20.624 00:15:20.624 00:15:20.624 Suite: bdevio tests on: Nvme1n1 00:15:20.624 Test: blockdev write read block ...passed 00:15:20.624 Test: blockdev write zeroes read block ...passed 00:15:20.624 Test: blockdev write zeroes read no split ...passed 00:15:20.624 Test: blockdev write zeroes read split ...passed 00:15:20.624 Test: blockdev write zeroes read split partial ...passed 00:15:20.624 Test: blockdev reset ...[2024-07-25 08:57:27.629639] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:20.624 [2024-07-25 08:57:27.629834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b280 (9): Bad file descriptor 00:15:20.624 [2024-07-25 08:57:27.650658] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:20.624 passed 00:15:20.624 Test: blockdev write read 8 blocks ...passed 00:15:20.624 Test: blockdev write read size > 128k ...passed 00:15:20.624 Test: blockdev write read invalid size ...passed 00:15:20.624 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:20.624 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:20.624 Test: blockdev write read max offset ...passed 00:15:20.624 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:20.624 Test: blockdev writev readv 8 blocks ...passed 00:15:20.624 Test: blockdev writev readv 30 x 1block ...passed 00:15:20.624 Test: blockdev writev readv block ...passed 00:15:20.624 Test: blockdev writev readv size > 128k ...passed 00:15:20.624 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:20.624 Test: blockdev comparev and writev ...[2024-07-25 08:57:27.662504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:20.624 [2024-07-25 08:57:27.662571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:20.624 [2024-07-25 08:57:27.662606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:20.624 [2024-07-25 08:57:27.662629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:20.624 [2024-07-25 08:57:27.663235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:20.624 [2024-07-25 08:57:27.663283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:20.624 [2024-07-25 08:57:27.663311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:20.624 [2024-07-25 08:57:27.663331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:20.624 [2024-07-25 08:57:27.663792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:20.624 [2024-07-25 08:57:27.663853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:20.624 [2024-07-25 08:57:27.663881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:20.624 [2024-07-25 08:57:27.663903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:20.624 [2024-07-25 08:57:27.664415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:20.624 [2024-07-25 08:57:27.664465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:20.624 [2024-07-25 08:57:27.664515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:20.624 [2024-07-25 08:57:27.664549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:20.624 passed 00:15:20.624 Test: blockdev nvme passthru rw ...passed 00:15:20.624 Test: blockdev nvme passthru vendor specific ...[2024-07-25 08:57:27.665663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:20.624 [2024-07-25 08:57:27.665708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:20.624 [2024-07-25 08:57:27.665884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:20.624 [2024-07-25 08:57:27.665914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:20.624 [2024-07-25 08:57:27.666057] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:20.624 [2024-07-25 08:57:27.666097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:20.624 [2024-07-25 08:57:27.666249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:20.624 [2024-07-25 08:57:27.666288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:20.624 passed 00:15:20.624 Test: blockdev nvme admin passthru ...passed 00:15:20.624 Test: blockdev copy ...passed 00:15:20.624 00:15:20.624 Run Summary: Type Total Ran Passed Failed Inactive 00:15:20.624 suites 1 1 n/a 0 0 00:15:20.624 tests 23 23 23 0 0 00:15:20.624 asserts 152 152 152 0 n/a 00:15:20.624 00:15:20.624 Elapsed time = 0.296 seconds 00:15:21.997 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:21.997 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.997 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:21.997 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.997 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:21.997 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:15:21.997 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:21.997 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:15:21.997 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:21.997 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:15:21.997 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:21.997 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:21.997 rmmod nvme_tcp 00:15:21.997 rmmod nvme_fabrics 00:15:21.997 rmmod nvme_keyring 00:15:21.997 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:21.997 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:15:21.997 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
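For orientation, the bdevio run that just completed boils down to a short target-side RPC sequence plus one JSON-driven initiator process. A minimal sketch of those steps follows; the flags are copied from the xtrace above, while the rpc.py path and the default /var/tmp/spdk.sock RPC socket are assumptions, not something this excerpt shows directly.

    # Target side: TCP transport, a 64 MiB malloc bdev, one subsystem with one namespace and listener.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"          # assumed helper location
    "$RPC" nvmf_create_transport -t tcp -o -u 8192             # flags as traced by rpc_cmd above
    "$RPC" bdev_malloc_create 64 512 -b Malloc0                # 131072 blocks x 512 B = 64 MiB, as bdevio reports
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Initiator side: bdevio is handed the gen_nvmf_target_json output on /dev/fd/62,
    # whose single bdev_nvme_attach_controller entry (Nvme1 -> 10.0.0.2:4420, cnode1)
    # is exactly the JSON printed in the trace above.

The 23 tests and 152 asserts in the run summary are then exercised against the resulting Nvme1n1 bdev, including the fused COMPARE and WRITE pairs whose COMPARE FAILURE / ABORTED - FAILED FUSED completions show up as NOTICE lines above even though the test itself passes.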
00:15:21.997 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 71304 ']' 00:15:21.997 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 71304 00:15:21.997 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 71304 ']' 00:15:21.997 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 71304 00:15:21.997 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:15:21.997 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:21.997 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71304 00:15:21.997 killing process with pid 71304 00:15:21.997 08:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:15:21.997 08:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:15:21.997 08:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71304' 00:15:21.997 08:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 71304 00:15:21.997 08:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 71304 00:15:23.370 08:57:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:23.370 08:57:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:23.370 08:57:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:23.370 08:57:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:23.370 08:57:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:23.370 08:57:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:23.370 08:57:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:23.370 08:57:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:23.370 08:57:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:23.370 00:15:23.370 real 0m5.427s 00:15:23.370 user 0m20.925s 00:15:23.370 sys 0m1.023s 00:15:23.370 08:57:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:23.370 08:57:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:23.370 ************************************ 00:15:23.370 END TEST nvmf_bdevio 00:15:23.370 ************************************ 00:15:23.370 08:57:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:23.370 00:15:23.370 real 2m58.542s 00:15:23.370 user 8m2.012s 00:15:23.370 sys 0m52.906s 00:15:23.370 08:57:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:23.370 08:57:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:15:23.370 ************************************ 00:15:23.370 END TEST nvmf_target_core 00:15:23.370 ************************************ 00:15:23.629 08:57:30 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra 
/home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:15:23.629 08:57:30 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:23.629 08:57:30 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:23.629 08:57:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:23.629 ************************************ 00:15:23.629 START TEST nvmf_target_extra 00:15:23.629 ************************************ 00:15:23.629 08:57:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:15:23.629 * Looking for test storage... 00:15:23.629 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:15:23.629 08:57:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:23.629 08:57:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:15:23.629 08:57:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:23.629 08:57:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:23.629 08:57:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:23.629 08:57:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:23.629 08:57:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:23.629 08:57:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:23.629 08:57:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:23.629 08:57:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:23.629 08:57:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:23.629 08:57:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:23.629 08:57:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:15:23.629 08:57:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:15:23.629 08:57:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:23.629 08:57:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:23.629 08:57:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:23.629 08:57:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:23.629 08:57:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:23.629 08:57:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:23.629 08:57:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:23.629 08:57:30 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:23.629 08:57:30 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.629 08:57:30 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.629 08:57:30 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.629 08:57:30 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:15:23.629 08:57:30 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.629 08:57:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:15:23.629 08:57:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:23.629 08:57:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:23.629 08:57:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:23.629 08:57:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:23.629 08:57:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:23.629 08:57:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:23.629 08:57:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:23.629 08:57:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:23.629 08:57:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:15:23.629 08:57:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:15:23.629 08:57:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:15:23.629 08:57:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:23.629 08:57:30 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:23.629 08:57:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:23.629 08:57:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:23.629 ************************************ 00:15:23.629 START TEST nvmf_auth_target 00:15:23.629 ************************************ 00:15:23.629 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:23.629 * Looking for test storage... 00:15:23.629 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:23.630 08:57:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:23.630 08:57:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:23.630 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:23.888 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:23.888 Cannot find device "nvmf_tgt_br" 00:15:23.888 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:15:23.888 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:23.888 Cannot find device "nvmf_tgt_br2" 00:15:23.888 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:15:23.888 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:23.888 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:23.888 Cannot find device "nvmf_tgt_br" 00:15:23.888 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:15:23.888 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:23.888 Cannot find device "nvmf_tgt_br2" 00:15:23.888 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:15:23.888 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:23.888 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:23.888 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:23.888 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:23.888 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:15:23.888 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:23.888 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:23.888 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:15:23.888 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:23.888 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:23.888 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:23.888 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:23.888 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:23.888 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:23.888 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:23.888 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:23.888 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:23.888 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:24.146 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:24.146 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:24.146 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:24.146 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:24.146 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:24.146 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:24.146 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:24.146 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:24.146 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:24.146 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:24.146 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:24.146 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:24.146 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:24.146 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:24.146 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:24.146 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:15:24.146 00:15:24.146 --- 10.0.0.2 ping statistics --- 00:15:24.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.146 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:15:24.146 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:24.146 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:24.146 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:15:24.146 00:15:24.146 --- 10.0.0.3 ping statistics --- 00:15:24.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.146 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:15:24.146 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:24.146 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:24.146 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:15:24.146 00:15:24.146 --- 10.0.0.1 ping statistics --- 00:15:24.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.146 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:15:24.146 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:24.146 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:15:24.146 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:24.146 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:24.146 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:24.146 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:24.146 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:24.146 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:24.146 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:24.146 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:15:24.146 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:24.146 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:24.146 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.146 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=71611 00:15:24.146 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:24.146 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 71611 00:15:24.146 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 71611 ']' 00:15:24.146 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.146 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:24.146 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
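Condensed, the nvmf_veth_init topology whose bring-up and ping checks appear above looks as follows. Every ip/iptables operation in this sketch occurs in the trace (some of the link-up calls are folded into a loop here); the pre-cleanup that produces the expected "Cannot find device" / "Cannot open network namespace" messages is left out.

    # One namespace for the target, three veth pairs, one bridge tying the host-side ends together.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

With that in place the host reaches 10.0.0.2 and 10.0.0.3 over the bridge and the namespace reaches 10.0.0.1, which is exactly what the three ping checks above verify before nvmfappstart launches the target inside nvmf_tgt_ns_spdk.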
00:15:24.146 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:24.146 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.079 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:25.079 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:15:25.079 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:25.079 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:25.079 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.079 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:25.079 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=71643 00:15:25.079 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:25.079 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:25.079 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:15:25.079 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:25.079 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:25.079 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:25.079 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:15:25.079 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:25.079 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:25.079 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=29f7118185762cfae4f600b6f08a7224c4af8cf128dd9512 00:15:25.079 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:15:25.079 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.kiq 00:15:25.079 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 29f7118185762cfae4f600b6f08a7224c4af8cf128dd9512 0 00:15:25.079 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 29f7118185762cfae4f600b6f08a7224c4af8cf128dd9512 0 00:15:25.079 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:25.079 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:25.079 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=29f7118185762cfae4f600b6f08a7224c4af8cf128dd9512 00:15:25.079 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:15:25.079 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:25.338 08:57:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.kiq 00:15:25.338 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.kiq 00:15:25.338 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.kiq 00:15:25.338 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:15:25.338 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:25.338 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:25.338 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:25.338 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:15:25.338 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:15:25.338 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:25.338 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5a5393804496a8c59d46160ae9b09d3b616761f841059058f1a43178805e9fcd 00:15:25.338 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:15:25.338 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.yp5 00:15:25.338 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5a5393804496a8c59d46160ae9b09d3b616761f841059058f1a43178805e9fcd 3 00:15:25.338 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5a5393804496a8c59d46160ae9b09d3b616761f841059058f1a43178805e9fcd 3 00:15:25.338 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:25.338 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:25.338 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5a5393804496a8c59d46160ae9b09d3b616761f841059058f1a43178805e9fcd 00:15:25.338 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:15:25.338 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:25.338 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.yp5 00:15:25.338 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.yp5 00:15:25.338 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.yp5 00:15:25.338 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:15:25.338 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:25.338 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:15:25.339 08:57:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=744721bb404cb25233c2e2db5aed1c65 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Tah 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 744721bb404cb25233c2e2db5aed1c65 1 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 744721bb404cb25233c2e2db5aed1c65 1 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=744721bb404cb25233c2e2db5aed1c65 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Tah 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Tah 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.Tah 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2ef2e2f71bc267e4bb584b76e391dd4d1e45cc35406eb8d9 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.rPT 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2ef2e2f71bc267e4bb584b76e391dd4d1e45cc35406eb8d9 2 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2ef2e2f71bc267e4bb584b76e391dd4d1e45cc35406eb8d9 2 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2ef2e2f71bc267e4bb584b76e391dd4d1e45cc35406eb8d9 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.rPT 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.rPT 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.rPT 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=bfe804c7c0b5f80282d07dbeac72027df06f2bb923ce72f0 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ql9 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key bfe804c7c0b5f80282d07dbeac72027df06f2bb923ce72f0 2 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 bfe804c7c0b5f80282d07dbeac72027df06f2bb923ce72f0 2 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=bfe804c7c0b5f80282d07dbeac72027df06f2bb923ce72f0 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:15:25.339 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ql9 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ql9 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.ql9 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:25.598 08:57:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ab9f8e6066110f7e9687aa1714d89612 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.8PC 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ab9f8e6066110f7e9687aa1714d89612 1 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ab9f8e6066110f7e9687aa1714d89612 1 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ab9f8e6066110f7e9687aa1714d89612 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.8PC 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.8PC 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.8PC 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b9bc768da3f5b283028b665f8ddfd72d564aa1345f696ebd9f33998163dd9252 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.qZq 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 
b9bc768da3f5b283028b665f8ddfd72d564aa1345f696ebd9f33998163dd9252 3 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b9bc768da3f5b283028b665f8ddfd72d564aa1345f696ebd9f33998163dd9252 3 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b9bc768da3f5b283028b665f8ddfd72d564aa1345f696ebd9f33998163dd9252 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.qZq 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.qZq 00:15:25.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.qZq 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 71611 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 71611 ']' 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:25.598 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:25.856 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:25.856 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:15:25.856 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 71643 /var/tmp/host.sock 00:15:25.856 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 71643 ']' 00:15:25.856 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:15:25.856 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:25.856 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
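(Note) The entries above are target/auth.sh building the last of its DH-HMAC-CHAP key material and then waiting for the target (/var/tmp/spdk.sock) and host (/var/tmp/host.sock) applications: gen_dhchap_key reads random bytes with xxd from /dev/urandom, passes the hex string plus a digest index (0=null, 1=sha256, 2=sha384, 3=sha512) through format_dhchap_key/format_key, packs it with an inline "python -" step, and stores the result under /tmp/spdk.key-* with mode 0600. The python body itself is not shown in the trace, so the stand-alone sketch below assumes the usual DHHC-1 text form (base64 of the key bytes followed by a CRC-32 suffix); treat the packing details as an assumption rather than a copy of the script.

# Hypothetical stand-alone equivalent of the gen_dhchap_key flow traced above.
# Digest index and the xxd/mktemp/chmod usage follow the trace; the CRC-32/base64
# packing inside the here-doc is an assumption about the DHHC-1 secret format.
set -euo pipefail
digest=${1:-1}    # 0=null, 1=sha256, 2=sha384, 3=sha512
nbytes=${2:-16}   # raw key length; the trace uses xxd -l 16, 24 and 32
hexkey=$(xxd -p -c0 -l "$nbytes" /dev/urandom)
keyfile=$(mktemp -t spdk.key-example.XXX)
python3 - "$hexkey" "$digest" > "$keyfile" <<'PY'
import base64, binascii, struct, sys
key = bytes.fromhex(sys.argv[1])
digest = int(sys.argv[2])
crc = struct.pack("<I", binascii.crc32(key) & 0xffffffff)  # assumed CRC-32 suffix
print(f"DHHC-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
PY
chmod 0600 "$keyfile"
echo "$keyfile"

The resulting file is what the keyring_file_add_key RPCs a few entries below load as keyN/ckeyN on the target and host sides.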
00:15:25.856 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:25.856 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.789 08:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:26.789 08:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:15:26.789 08:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:15:26.789 08:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.789 08:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.789 08:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.789 08:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:26.789 08:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.kiq 00:15:26.789 08:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.789 08:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.789 08:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.789 08:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.kiq 00:15:26.789 08:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.kiq 00:15:27.046 08:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.yp5 ]] 00:15:27.046 08:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.yp5 00:15:27.046 08:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.046 08:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.046 08:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.046 08:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.yp5 00:15:27.046 08:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.yp5 00:15:27.303 08:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:27.303 08:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Tah 00:15:27.303 08:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.303 08:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.303 08:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.303 08:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Tah 00:15:27.303 08:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Tah 00:15:27.560 08:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.rPT ]] 00:15:27.560 08:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.rPT 00:15:27.560 08:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.560 08:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.560 08:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.560 08:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.rPT 00:15:27.560 08:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.rPT 00:15:27.817 08:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:27.817 08:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ql9 00:15:27.817 08:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.817 08:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.817 08:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.817 08:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.ql9 00:15:27.817 08:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.ql9 00:15:28.075 08:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.8PC ]] 00:15:28.075 08:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.8PC 00:15:28.075 08:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.075 08:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.075 08:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.075 08:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.8PC 00:15:28.075 08:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.8PC 00:15:28.332 08:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:28.332 08:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.qZq 00:15:28.332 08:57:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.332 08:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.332 08:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.332 08:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.qZq 00:15:28.332 08:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.qZq 00:15:28.589 08:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:15:28.589 08:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:28.589 08:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:28.589 08:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:28.589 08:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:28.590 08:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:28.847 08:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:15:28.847 08:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:28.847 08:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:28.847 08:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:28.847 08:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:28.847 08:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.847 08:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.847 08:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.847 08:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.847 08:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.847 08:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.847 08:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key 
ckey0 00:15:29.105 00:15:29.105 08:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:29.105 08:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.105 08:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:29.362 08:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.362 08:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.362 08:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.362 08:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.362 08:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.362 08:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:29.362 { 00:15:29.362 "cntlid": 1, 00:15:29.362 "qid": 0, 00:15:29.362 "state": "enabled", 00:15:29.362 "thread": "nvmf_tgt_poll_group_000", 00:15:29.362 "listen_address": { 00:15:29.362 "trtype": "TCP", 00:15:29.362 "adrfam": "IPv4", 00:15:29.362 "traddr": "10.0.0.2", 00:15:29.362 "trsvcid": "4420" 00:15:29.362 }, 00:15:29.362 "peer_address": { 00:15:29.362 "trtype": "TCP", 00:15:29.362 "adrfam": "IPv4", 00:15:29.362 "traddr": "10.0.0.1", 00:15:29.362 "trsvcid": "60018" 00:15:29.362 }, 00:15:29.362 "auth": { 00:15:29.362 "state": "completed", 00:15:29.362 "digest": "sha256", 00:15:29.362 "dhgroup": "null" 00:15:29.362 } 00:15:29.362 } 00:15:29.362 ]' 00:15:29.363 08:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:29.620 08:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:29.620 08:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:29.620 08:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:29.620 08:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:29.620 08:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.620 08:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.620 08:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.878 08:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:00:MjlmNzExODE4NTc2MmNmYWU0ZjYwMGI2ZjA4YTcyMjRjNGFmOGNmMTI4ZGQ5NTEyJWIzmQ==: --dhchap-ctrl-secret DHHC-1:03:NWE1MzkzODA0NDk2YThjNTlkNDYxNjBhZTliMDlkM2I2MTY3NjFmODQxMDU5MDU4ZjFhNDMxNzg4MDVlOWZjZEEK018=: 00:15:35.140 08:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.140 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:15:35.140 08:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:15:35.140 08:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.140 08:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.140 08:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.140 08:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:35.140 08:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:35.140 08:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:35.140 08:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:15:35.140 08:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:35.140 08:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:35.140 08:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:35.140 08:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:35.140 08:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.140 08:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.140 08:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.140 08:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.140 08:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.140 08:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.140 08:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.140 00:15:35.140 08:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:35.140 08:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.140 08:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
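(Note) Each round in this stretch pairs a target-side registration with a host-side attach: the target (default RPC socket /var/tmp/spdk.sock) is told to accept the host NQN with a given key/ctrlr-key pair via nvmf_subsystem_add_host, the second SPDK application listening on /var/tmp/host.sock attaches with the matching pair via bdev_nvme_attach_controller, and the first check is simply that a controller named nvme0 shows up on the host side. The condensed sketch below uses the key1 round shown here; the rpc_cmd/hostrpc wrappers are the script's own helpers, so plain rpc.py calls stand in for them.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5

# target side: allow this host NQN, authenticated with key1/ckey1
"$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# host side: attach over TCP with the matching key pair (mutual authentication)
"$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# sanity check: the controller actually came up on the host application
[[ $("$RPC" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]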
00:15:35.140 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.140 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.140 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.140 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.140 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.140 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:35.140 { 00:15:35.140 "cntlid": 3, 00:15:35.140 "qid": 0, 00:15:35.140 "state": "enabled", 00:15:35.140 "thread": "nvmf_tgt_poll_group_000", 00:15:35.140 "listen_address": { 00:15:35.140 "trtype": "TCP", 00:15:35.140 "adrfam": "IPv4", 00:15:35.140 "traddr": "10.0.0.2", 00:15:35.140 "trsvcid": "4420" 00:15:35.140 }, 00:15:35.140 "peer_address": { 00:15:35.140 "trtype": "TCP", 00:15:35.140 "adrfam": "IPv4", 00:15:35.140 "traddr": "10.0.0.1", 00:15:35.140 "trsvcid": "60038" 00:15:35.140 }, 00:15:35.140 "auth": { 00:15:35.140 "state": "completed", 00:15:35.140 "digest": "sha256", 00:15:35.140 "dhgroup": "null" 00:15:35.140 } 00:15:35.140 } 00:15:35.140 ]' 00:15:35.140 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:35.140 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:35.140 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:35.140 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:35.141 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:35.398 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.398 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.398 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.656 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:01:NzQ0NzIxYmI0MDRjYjI1MjMzYzJlMmRiNWFlZDFjNjXiFtJQ: --dhchap-ctrl-secret DHHC-1:02:MmVmMmUyZjcxYmMyNjdlNGJiNTg0Yjc2ZTM5MWRkNGQxZTQ1Y2MzNTQwNmViOGQ58n44LA==: 00:15:36.222 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.222 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.222 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:15:36.222 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.222 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
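(Note) Once the controller is attached, connect_authenticate asserts on what the target actually negotiated: nvmf_subsystem_get_qpairs is queried over the target RPC socket and jq pulls the auth digest, DH group and state out of the first qpair (sha256 / null / completed in these rounds). The round then detaches the SPDK controller and repeats the handshake with the kernel initiator, handing the same keys to nvme connect as DHHC-1:NN:...: strings via --dhchap-secret and --dhchap-ctrl-secret, before disconnecting and removing the host entry again. A minimal form of the qpair assertion, with RPC name, NQN, jq paths and expected values taken from this round:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
qpairs=$("$RPC" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]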
00:15:36.222 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.222 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:36.222 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:36.222 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:36.479 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:15:36.479 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:36.479 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:36.479 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:36.479 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:36.479 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.479 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.479 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.479 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.479 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.479 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.479 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.737 00:15:36.995 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:36.995 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:36.995 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.253 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.253 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.253 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.253 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:37.253 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.253 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:37.253 { 00:15:37.253 "cntlid": 5, 00:15:37.253 "qid": 0, 00:15:37.253 "state": "enabled", 00:15:37.253 "thread": "nvmf_tgt_poll_group_000", 00:15:37.253 "listen_address": { 00:15:37.253 "trtype": "TCP", 00:15:37.253 "adrfam": "IPv4", 00:15:37.253 "traddr": "10.0.0.2", 00:15:37.253 "trsvcid": "4420" 00:15:37.253 }, 00:15:37.253 "peer_address": { 00:15:37.253 "trtype": "TCP", 00:15:37.253 "adrfam": "IPv4", 00:15:37.253 "traddr": "10.0.0.1", 00:15:37.253 "trsvcid": "39194" 00:15:37.253 }, 00:15:37.253 "auth": { 00:15:37.253 "state": "completed", 00:15:37.253 "digest": "sha256", 00:15:37.253 "dhgroup": "null" 00:15:37.253 } 00:15:37.253 } 00:15:37.253 ]' 00:15:37.253 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:37.254 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:37.254 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:37.254 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:37.254 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:37.254 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.254 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.254 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.821 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:02:YmZlODA0YzdjMGI1ZjgwMjgyZDA3ZGJlYWM3MjAyN2RmMDZmMmJiOTIzY2U3MmYw8KNWIw==: --dhchap-ctrl-secret DHHC-1:01:YWI5ZjhlNjA2NjExMGY3ZTk2ODdhYTE3MTRkODk2MTL2aiAd: 00:15:38.386 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.386 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:15:38.386 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.386 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.386 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.386 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:38.386 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:38.386 08:57:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:38.644 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:15:38.644 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:38.644 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:38.644 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:38.644 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:38.644 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.644 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key3 00:15:38.644 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.644 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.644 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.644 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:38.644 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:38.902 00:15:38.902 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:38.902 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:38.902 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.159 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.159 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.159 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.159 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.159 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.159 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:39.159 { 00:15:39.159 "cntlid": 7, 00:15:39.159 "qid": 0, 00:15:39.159 "state": "enabled", 00:15:39.159 "thread": "nvmf_tgt_poll_group_000", 00:15:39.159 "listen_address": { 00:15:39.159 "trtype": "TCP", 00:15:39.159 "adrfam": "IPv4", 00:15:39.159 "traddr": 
"10.0.0.2", 00:15:39.159 "trsvcid": "4420" 00:15:39.159 }, 00:15:39.159 "peer_address": { 00:15:39.159 "trtype": "TCP", 00:15:39.159 "adrfam": "IPv4", 00:15:39.159 "traddr": "10.0.0.1", 00:15:39.159 "trsvcid": "39212" 00:15:39.159 }, 00:15:39.159 "auth": { 00:15:39.159 "state": "completed", 00:15:39.159 "digest": "sha256", 00:15:39.160 "dhgroup": "null" 00:15:39.160 } 00:15:39.160 } 00:15:39.160 ]' 00:15:39.160 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:39.160 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:39.160 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:39.160 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:39.160 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:39.417 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.417 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.417 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.674 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:03:YjliYzc2OGRhM2Y1YjI4MzAyOGI2NjVmOGRkZmQ3MmQ1NjRhYTEzNDVmNjk2ZWJkOWYzMzk5ODE2M2RkOTI1MlpGH1c=: 00:15:40.240 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.240 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:15:40.240 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.240 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.240 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.240 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:40.240 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:40.240 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:40.240 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:40.498 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:15:40.498 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:40.498 08:57:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:40.498 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:40.498 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:40.498 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.498 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.498 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.498 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.498 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.498 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.498 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:41.063 00:15:41.063 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:41.063 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:41.063 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.320 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.320 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.320 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.320 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.320 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.320 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:41.320 { 00:15:41.320 "cntlid": 9, 00:15:41.320 "qid": 0, 00:15:41.320 "state": "enabled", 00:15:41.320 "thread": "nvmf_tgt_poll_group_000", 00:15:41.320 "listen_address": { 00:15:41.320 "trtype": "TCP", 00:15:41.320 "adrfam": "IPv4", 00:15:41.320 "traddr": "10.0.0.2", 00:15:41.320 "trsvcid": "4420" 00:15:41.320 }, 00:15:41.320 "peer_address": { 00:15:41.320 "trtype": "TCP", 00:15:41.320 "adrfam": "IPv4", 00:15:41.320 "traddr": "10.0.0.1", 00:15:41.320 "trsvcid": "39244" 00:15:41.320 }, 00:15:41.320 "auth": { 00:15:41.320 "state": "completed", 00:15:41.320 "digest": "sha256", 00:15:41.320 "dhgroup": "ffdhe2048" 00:15:41.320 } 00:15:41.320 } 
00:15:41.320 ]' 00:15:41.320 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:41.320 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:41.320 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:41.320 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:41.320 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:41.320 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.320 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.320 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.577 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:00:MjlmNzExODE4NTc2MmNmYWU0ZjYwMGI2ZjA4YTcyMjRjNGFmOGNmMTI4ZGQ5NTEyJWIzmQ==: --dhchap-ctrl-secret DHHC-1:03:NWE1MzkzODA0NDk2YThjNTlkNDYxNjBhZTliMDlkM2I2MTY3NjFmODQxMDU5MDU4ZjFhNDMxNzg4MDVlOWZjZEEK018=: 00:15:42.510 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.510 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.510 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:15:42.510 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.510 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.510 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.510 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:42.510 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:42.510 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:42.510 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:15:42.510 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:42.510 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:42.510 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:42.510 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:42.510 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.510 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.510 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.510 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.510 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.510 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.510 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.077 00:15:43.077 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:43.077 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.077 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:43.335 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.335 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.335 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.335 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.335 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.335 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:43.335 { 00:15:43.335 "cntlid": 11, 00:15:43.335 "qid": 0, 00:15:43.335 "state": "enabled", 00:15:43.335 "thread": "nvmf_tgt_poll_group_000", 00:15:43.335 "listen_address": { 00:15:43.335 "trtype": "TCP", 00:15:43.335 "adrfam": "IPv4", 00:15:43.335 "traddr": "10.0.0.2", 00:15:43.335 "trsvcid": "4420" 00:15:43.335 }, 00:15:43.335 "peer_address": { 00:15:43.335 "trtype": "TCP", 00:15:43.335 "adrfam": "IPv4", 00:15:43.335 "traddr": "10.0.0.1", 00:15:43.335 "trsvcid": "39280" 00:15:43.335 }, 00:15:43.335 "auth": { 00:15:43.335 "state": "completed", 00:15:43.335 "digest": "sha256", 00:15:43.335 "dhgroup": "ffdhe2048" 00:15:43.335 } 00:15:43.335 } 00:15:43.335 ]' 00:15:43.335 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:43.335 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:43.335 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:43.335 08:57:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:43.335 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:43.335 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.335 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.335 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.593 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:01:NzQ0NzIxYmI0MDRjYjI1MjMzYzJlMmRiNWFlZDFjNjXiFtJQ: --dhchap-ctrl-secret DHHC-1:02:MmVmMmUyZjcxYmMyNjdlNGJiNTg0Yjc2ZTM5MWRkNGQxZTQ1Y2MzNTQwNmViOGQ58n44LA==: 00:15:44.527 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.527 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:15:44.527 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.527 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.527 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.527 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:44.527 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:44.527 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:44.785 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:15:44.785 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:44.785 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:44.785 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:44.785 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:44.785 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.785 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.785 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
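(Note) From this point the same four keys are being re-run with --dhchap-dhgroups ffdhe2048 instead of null, and the qpair check accordingly expects "ffdhe2048" as the negotiated DH group. The for-loop markers visible in the trace (target/auth.sh@91-@96) suggest the overall structure sketched below; the concrete contents of the digests/dhgroups arrays are not visible in this excerpt, so they are placeholders, and hostrpc/connect_authenticate are stubbed so the skeleton runs stand-alone.

# Loop skeleton reconstructed from the @91-@96 markers; arrays and stubs are placeholders.
hostrpc()              { echo "host rpc: $*"; }        # stands in for rpc.py -s /var/tmp/host.sock ...
connect_authenticate() { echo "authenticate: $*"; }    # stands in for the per-key attach/verify round

keys=(/tmp/key0 /tmp/key1 /tmp/key2 /tmp/key3)         # placeholder key paths
digests=(sha256)                                       # this excerpt only shows sha256
dhgroups=(null ffdhe2048)                              # groups exercised so far in the log

for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      connect_authenticate "$digest" "$dhgroup" "$keyid"
    done
  done
done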
00:15:44.785 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.785 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.785 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.785 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.043 00:15:45.043 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:45.043 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:45.043 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.301 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.301 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.301 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.301 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.301 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.301 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:45.301 { 00:15:45.301 "cntlid": 13, 00:15:45.301 "qid": 0, 00:15:45.301 "state": "enabled", 00:15:45.301 "thread": "nvmf_tgt_poll_group_000", 00:15:45.301 "listen_address": { 00:15:45.301 "trtype": "TCP", 00:15:45.301 "adrfam": "IPv4", 00:15:45.301 "traddr": "10.0.0.2", 00:15:45.301 "trsvcid": "4420" 00:15:45.301 }, 00:15:45.301 "peer_address": { 00:15:45.301 "trtype": "TCP", 00:15:45.301 "adrfam": "IPv4", 00:15:45.301 "traddr": "10.0.0.1", 00:15:45.301 "trsvcid": "39318" 00:15:45.301 }, 00:15:45.301 "auth": { 00:15:45.301 "state": "completed", 00:15:45.301 "digest": "sha256", 00:15:45.301 "dhgroup": "ffdhe2048" 00:15:45.301 } 00:15:45.301 } 00:15:45.301 ]' 00:15:45.301 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:45.301 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:45.301 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:45.301 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:45.301 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:45.561 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.561 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.561 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.561 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:02:YmZlODA0YzdjMGI1ZjgwMjgyZDA3ZGJlYWM3MjAyN2RmMDZmMmJiOTIzY2U3MmYw8KNWIw==: --dhchap-ctrl-secret DHHC-1:01:YWI5ZjhlNjA2NjExMGY3ZTk2ODdhYTE3MTRkODk2MTL2aiAd: 00:15:46.498 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.498 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.498 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:15:46.498 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.498 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.498 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.498 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:46.498 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:46.498 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:46.757 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:15:46.757 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:46.757 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:46.757 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:46.757 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:46.757 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.757 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key3 00:15:46.757 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.757 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.757 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.757 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:46.757 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:47.015 00:15:47.015 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:47.015 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:47.015 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.273 08:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.273 08:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.273 08:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.273 08:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.273 08:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.273 08:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:47.273 { 00:15:47.273 "cntlid": 15, 00:15:47.273 "qid": 0, 00:15:47.273 "state": "enabled", 00:15:47.273 "thread": "nvmf_tgt_poll_group_000", 00:15:47.273 "listen_address": { 00:15:47.273 "trtype": "TCP", 00:15:47.273 "adrfam": "IPv4", 00:15:47.273 "traddr": "10.0.0.2", 00:15:47.273 "trsvcid": "4420" 00:15:47.273 }, 00:15:47.273 "peer_address": { 00:15:47.273 "trtype": "TCP", 00:15:47.273 "adrfam": "IPv4", 00:15:47.273 "traddr": "10.0.0.1", 00:15:47.273 "trsvcid": "38634" 00:15:47.273 }, 00:15:47.273 "auth": { 00:15:47.273 "state": "completed", 00:15:47.273 "digest": "sha256", 00:15:47.273 "dhgroup": "ffdhe2048" 00:15:47.273 } 00:15:47.273 } 00:15:47.273 ]' 00:15:47.273 08:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:47.273 08:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:47.273 08:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:47.273 08:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:47.273 08:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:47.273 08:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.273 08:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.273 08:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.839 08:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:03:YjliYzc2OGRhM2Y1YjI4MzAyOGI2NjVmOGRkZmQ3MmQ1NjRhYTEzNDVmNjk2ZWJkOWYzMzk5ODE2M2RkOTI1MlpGH1c=: 00:15:48.405 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.405 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:15:48.405 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.405 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.405 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.405 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:48.405 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:48.405 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:48.405 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:48.664 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:15:48.664 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:48.664 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:48.664 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:48.664 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:48.664 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.664 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.664 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.664 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.664 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.664 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.664 08:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.933 00:15:48.933 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:48.933 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.933 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:49.219 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.219 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.219 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.219 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.219 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.219 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:49.219 { 00:15:49.219 "cntlid": 17, 00:15:49.219 "qid": 0, 00:15:49.219 "state": "enabled", 00:15:49.219 "thread": "nvmf_tgt_poll_group_000", 00:15:49.219 "listen_address": { 00:15:49.219 "trtype": "TCP", 00:15:49.219 "adrfam": "IPv4", 00:15:49.219 "traddr": "10.0.0.2", 00:15:49.219 "trsvcid": "4420" 00:15:49.219 }, 00:15:49.219 "peer_address": { 00:15:49.219 "trtype": "TCP", 00:15:49.219 "adrfam": "IPv4", 00:15:49.219 "traddr": "10.0.0.1", 00:15:49.219 "trsvcid": "38668" 00:15:49.219 }, 00:15:49.219 "auth": { 00:15:49.219 "state": "completed", 00:15:49.219 "digest": "sha256", 00:15:49.219 "dhgroup": "ffdhe3072" 00:15:49.219 } 00:15:49.219 } 00:15:49.219 ]' 00:15:49.219 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:49.477 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:49.477 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:49.477 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:49.477 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:49.477 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.477 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.477 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.734 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:00:MjlmNzExODE4NTc2MmNmYWU0ZjYwMGI2ZjA4YTcyMjRjNGFmOGNmMTI4ZGQ5NTEyJWIzmQ==: --dhchap-ctrl-secret DHHC-1:03:NWE1MzkzODA0NDk2YThjNTlkNDYxNjBhZTliMDlkM2I2MTY3NjFmODQxMDU5MDU4ZjFhNDMxNzg4MDVlOWZjZEEK018=: 00:15:50.300 08:57:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.300 08:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:15:50.300 08:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.300 08:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.300 08:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.300 08:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:50.300 08:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:50.300 08:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:50.557 08:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:15:50.557 08:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:50.557 08:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:50.557 08:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:50.557 08:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:50.557 08:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.557 08:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.557 08:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.557 08:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.557 08:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.557 08:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.557 08:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.125 00:15:51.125 08:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:51.125 08:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.125 08:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:51.383 08:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.383 08:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.383 08:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.383 08:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.383 08:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.383 08:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:51.383 { 00:15:51.383 "cntlid": 19, 00:15:51.383 "qid": 0, 00:15:51.383 "state": "enabled", 00:15:51.383 "thread": "nvmf_tgt_poll_group_000", 00:15:51.383 "listen_address": { 00:15:51.383 "trtype": "TCP", 00:15:51.383 "adrfam": "IPv4", 00:15:51.383 "traddr": "10.0.0.2", 00:15:51.383 "trsvcid": "4420" 00:15:51.383 }, 00:15:51.383 "peer_address": { 00:15:51.383 "trtype": "TCP", 00:15:51.383 "adrfam": "IPv4", 00:15:51.383 "traddr": "10.0.0.1", 00:15:51.383 "trsvcid": "38686" 00:15:51.383 }, 00:15:51.383 "auth": { 00:15:51.383 "state": "completed", 00:15:51.383 "digest": "sha256", 00:15:51.383 "dhgroup": "ffdhe3072" 00:15:51.383 } 00:15:51.383 } 00:15:51.383 ]' 00:15:51.383 08:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:51.383 08:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:51.383 08:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:51.383 08:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:51.383 08:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:51.641 08:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.641 08:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.641 08:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.900 08:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:01:NzQ0NzIxYmI0MDRjYjI1MjMzYzJlMmRiNWFlZDFjNjXiFtJQ: --dhchap-ctrl-secret DHHC-1:02:MmVmMmUyZjcxYmMyNjdlNGJiNTg0Yjc2ZTM5MWRkNGQxZTQ1Y2MzNTQwNmViOGQ58n44LA==: 00:15:52.466 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.466 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:15:52.466 
08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.466 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.466 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.466 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:52.467 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:52.467 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:52.725 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:15:52.725 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:52.725 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:52.725 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:52.725 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:52.725 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.725 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.725 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.725 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.725 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.725 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.725 08:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.008 00:15:53.008 08:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:53.008 08:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.008 08:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:53.289 08:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.289 08:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:15:53.289 08:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.289 08:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.289 08:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.289 08:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:53.289 { 00:15:53.289 "cntlid": 21, 00:15:53.289 "qid": 0, 00:15:53.289 "state": "enabled", 00:15:53.289 "thread": "nvmf_tgt_poll_group_000", 00:15:53.289 "listen_address": { 00:15:53.289 "trtype": "TCP", 00:15:53.289 "adrfam": "IPv4", 00:15:53.289 "traddr": "10.0.0.2", 00:15:53.289 "trsvcid": "4420" 00:15:53.289 }, 00:15:53.289 "peer_address": { 00:15:53.289 "trtype": "TCP", 00:15:53.289 "adrfam": "IPv4", 00:15:53.289 "traddr": "10.0.0.1", 00:15:53.289 "trsvcid": "38714" 00:15:53.289 }, 00:15:53.289 "auth": { 00:15:53.289 "state": "completed", 00:15:53.289 "digest": "sha256", 00:15:53.289 "dhgroup": "ffdhe3072" 00:15:53.289 } 00:15:53.289 } 00:15:53.289 ]' 00:15:53.289 08:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:53.547 08:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:53.547 08:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:53.547 08:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:53.547 08:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:53.547 08:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.547 08:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.547 08:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.805 08:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:02:YmZlODA0YzdjMGI1ZjgwMjgyZDA3ZGJlYWM3MjAyN2RmMDZmMmJiOTIzY2U3MmYw8KNWIw==: --dhchap-ctrl-secret DHHC-1:01:YWI5ZjhlNjA2NjExMGY3ZTk2ODdhYTE3MTRkODk2MTL2aiAd: 00:15:54.370 08:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.370 08:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:15:54.370 08:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.370 08:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.370 08:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.370 08:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:15:54.371 08:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:54.371 08:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:54.628 08:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:15:54.628 08:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:54.628 08:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:54.628 08:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:54.628 08:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:54.628 08:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.628 08:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key3 00:15:54.628 08:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.628 08:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.628 08:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.628 08:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:54.628 08:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:55.193 00:15:55.193 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:55.193 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.193 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:55.451 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.451 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.451 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.451 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.451 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.451 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:55.451 { 00:15:55.451 "cntlid": 
23, 00:15:55.451 "qid": 0, 00:15:55.451 "state": "enabled", 00:15:55.451 "thread": "nvmf_tgt_poll_group_000", 00:15:55.451 "listen_address": { 00:15:55.451 "trtype": "TCP", 00:15:55.451 "adrfam": "IPv4", 00:15:55.451 "traddr": "10.0.0.2", 00:15:55.451 "trsvcid": "4420" 00:15:55.451 }, 00:15:55.451 "peer_address": { 00:15:55.451 "trtype": "TCP", 00:15:55.451 "adrfam": "IPv4", 00:15:55.451 "traddr": "10.0.0.1", 00:15:55.451 "trsvcid": "38750" 00:15:55.451 }, 00:15:55.451 "auth": { 00:15:55.451 "state": "completed", 00:15:55.451 "digest": "sha256", 00:15:55.451 "dhgroup": "ffdhe3072" 00:15:55.451 } 00:15:55.451 } 00:15:55.451 ]' 00:15:55.451 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:55.451 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:55.451 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:55.451 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:55.451 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:55.451 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.451 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.451 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.709 08:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:03:YjliYzc2OGRhM2Y1YjI4MzAyOGI2NjVmOGRkZmQ3MmQ1NjRhYTEzNDVmNjk2ZWJkOWYzMzk5ODE2M2RkOTI1MlpGH1c=: 00:15:56.643 08:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.643 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.643 08:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:15:56.643 08:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.643 08:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.643 08:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.643 08:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:56.643 08:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:56.643 08:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:56.643 08:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:56.643 08:58:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:15:56.643 08:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:56.643 08:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:56.643 08:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:56.643 08:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:56.643 08:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.643 08:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:56.643 08:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.643 08:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.643 08:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.643 08:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:56.643 08:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.209 00:15:57.209 08:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:57.209 08:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:57.209 08:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.468 08:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.468 08:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.468 08:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.468 08:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.468 08:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.468 08:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:57.468 { 00:15:57.468 "cntlid": 25, 00:15:57.468 "qid": 0, 00:15:57.468 "state": "enabled", 00:15:57.468 "thread": "nvmf_tgt_poll_group_000", 00:15:57.468 "listen_address": { 00:15:57.468 "trtype": "TCP", 00:15:57.468 "adrfam": "IPv4", 00:15:57.468 "traddr": "10.0.0.2", 00:15:57.468 "trsvcid": "4420" 00:15:57.468 }, 00:15:57.468 "peer_address": { 00:15:57.468 "trtype": "TCP", 00:15:57.468 
"adrfam": "IPv4", 00:15:57.468 "traddr": "10.0.0.1", 00:15:57.468 "trsvcid": "43378" 00:15:57.468 }, 00:15:57.468 "auth": { 00:15:57.468 "state": "completed", 00:15:57.468 "digest": "sha256", 00:15:57.468 "dhgroup": "ffdhe4096" 00:15:57.468 } 00:15:57.468 } 00:15:57.468 ]' 00:15:57.468 08:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:57.468 08:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:57.468 08:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:57.726 08:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:57.726 08:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:57.726 08:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.726 08:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.726 08:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.985 08:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:00:MjlmNzExODE4NTc2MmNmYWU0ZjYwMGI2ZjA4YTcyMjRjNGFmOGNmMTI4ZGQ5NTEyJWIzmQ==: --dhchap-ctrl-secret DHHC-1:03:NWE1MzkzODA0NDk2YThjNTlkNDYxNjBhZTliMDlkM2I2MTY3NjFmODQxMDU5MDU4ZjFhNDMxNzg4MDVlOWZjZEEK018=: 00:15:58.552 08:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.552 08:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:15:58.552 08:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.552 08:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.552 08:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.552 08:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:58.552 08:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:58.552 08:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:59.118 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:15:59.118 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:59.118 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:59.118 08:58:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:59.118 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:59.118 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.118 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.118 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.118 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.118 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.118 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.118 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.376 00:15:59.634 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:59.634 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:59.634 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.634 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.634 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.634 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.634 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.893 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.893 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:59.893 { 00:15:59.893 "cntlid": 27, 00:15:59.893 "qid": 0, 00:15:59.893 "state": "enabled", 00:15:59.893 "thread": "nvmf_tgt_poll_group_000", 00:15:59.893 "listen_address": { 00:15:59.893 "trtype": "TCP", 00:15:59.893 "adrfam": "IPv4", 00:15:59.893 "traddr": "10.0.0.2", 00:15:59.893 "trsvcid": "4420" 00:15:59.893 }, 00:15:59.893 "peer_address": { 00:15:59.893 "trtype": "TCP", 00:15:59.893 "adrfam": "IPv4", 00:15:59.893 "traddr": "10.0.0.1", 00:15:59.893 "trsvcid": "43396" 00:15:59.893 }, 00:15:59.893 "auth": { 00:15:59.893 "state": "completed", 00:15:59.893 "digest": "sha256", 00:15:59.893 "dhgroup": "ffdhe4096" 00:15:59.893 } 00:15:59.893 } 00:15:59.893 ]' 00:15:59.893 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:15:59.893 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:59.893 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:59.893 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:59.893 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:59.893 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.893 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.893 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.152 08:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:01:NzQ0NzIxYmI0MDRjYjI1MjMzYzJlMmRiNWFlZDFjNjXiFtJQ: --dhchap-ctrl-secret DHHC-1:02:MmVmMmUyZjcxYmMyNjdlNGJiNTg0Yjc2ZTM5MWRkNGQxZTQ1Y2MzNTQwNmViOGQ58n44LA==: 00:16:01.089 08:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.089 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.089 08:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:16:01.089 08:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.089 08:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.089 08:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.089 08:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:01.089 08:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:01.089 08:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:01.089 08:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:16:01.089 08:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:01.089 08:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:01.089 08:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:01.089 08:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:01.089 08:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.089 08:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.089 08:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.089 08:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.089 08:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.089 08:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.089 08:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.655 00:16:01.655 08:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:01.655 08:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:01.655 08:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.655 08:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.655 08:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.655 08:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.655 08:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.655 08:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.656 08:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:01.656 { 00:16:01.656 "cntlid": 29, 00:16:01.656 "qid": 0, 00:16:01.656 "state": "enabled", 00:16:01.656 "thread": "nvmf_tgt_poll_group_000", 00:16:01.656 "listen_address": { 00:16:01.656 "trtype": "TCP", 00:16:01.656 "adrfam": "IPv4", 00:16:01.656 "traddr": "10.0.0.2", 00:16:01.656 "trsvcid": "4420" 00:16:01.656 }, 00:16:01.656 "peer_address": { 00:16:01.656 "trtype": "TCP", 00:16:01.656 "adrfam": "IPv4", 00:16:01.656 "traddr": "10.0.0.1", 00:16:01.656 "trsvcid": "43408" 00:16:01.656 }, 00:16:01.656 "auth": { 00:16:01.656 "state": "completed", 00:16:01.656 "digest": "sha256", 00:16:01.656 "dhgroup": "ffdhe4096" 00:16:01.656 } 00:16:01.656 } 00:16:01.656 ]' 00:16:01.656 08:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:01.914 08:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:01.914 08:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:01.914 08:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:01.914 08:58:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:01.914 08:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.914 08:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.914 08:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.172 08:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:02:YmZlODA0YzdjMGI1ZjgwMjgyZDA3ZGJlYWM3MjAyN2RmMDZmMmJiOTIzY2U3MmYw8KNWIw==: --dhchap-ctrl-secret DHHC-1:01:YWI5ZjhlNjA2NjExMGY3ZTk2ODdhYTE3MTRkODk2MTL2aiAd: 00:16:03.109 08:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.109 08:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:16:03.109 08:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.109 08:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.109 08:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.109 08:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:03.109 08:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:03.109 08:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:03.368 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:16:03.368 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:03.368 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:03.368 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:03.368 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:03.368 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.368 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key3 00:16:03.368 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.368 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.368 08:58:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.368 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:03.368 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:03.627 00:16:03.627 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:03.627 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:03.627 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.886 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.886 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.886 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.886 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.886 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.886 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:03.886 { 00:16:03.886 "cntlid": 31, 00:16:03.886 "qid": 0, 00:16:03.886 "state": "enabled", 00:16:03.886 "thread": "nvmf_tgt_poll_group_000", 00:16:03.886 "listen_address": { 00:16:03.886 "trtype": "TCP", 00:16:03.886 "adrfam": "IPv4", 00:16:03.886 "traddr": "10.0.0.2", 00:16:03.886 "trsvcid": "4420" 00:16:03.886 }, 00:16:03.886 "peer_address": { 00:16:03.886 "trtype": "TCP", 00:16:03.886 "adrfam": "IPv4", 00:16:03.886 "traddr": "10.0.0.1", 00:16:03.886 "trsvcid": "43436" 00:16:03.886 }, 00:16:03.886 "auth": { 00:16:03.886 "state": "completed", 00:16:03.886 "digest": "sha256", 00:16:03.886 "dhgroup": "ffdhe4096" 00:16:03.886 } 00:16:03.886 } 00:16:03.886 ]' 00:16:03.886 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:04.145 08:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:04.145 08:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:04.145 08:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:04.145 08:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:04.145 08:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.145 08:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.145 08:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.403 08:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:03:YjliYzc2OGRhM2Y1YjI4MzAyOGI2NjVmOGRkZmQ3MmQ1NjRhYTEzNDVmNjk2ZWJkOWYzMzk5ODE2M2RkOTI1MlpGH1c=: 00:16:05.337 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.337 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:16:05.337 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.337 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.337 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.337 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:05.337 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:05.337 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:05.337 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:05.595 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:16:05.595 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:05.595 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:05.595 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:05.595 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:05.595 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.595 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.595 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.595 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.595 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.595 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:16:05.595 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.854 00:16:05.854 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:05.854 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.854 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:06.428 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.428 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.428 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.428 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.428 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.428 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:06.428 { 00:16:06.428 "cntlid": 33, 00:16:06.428 "qid": 0, 00:16:06.428 "state": "enabled", 00:16:06.428 "thread": "nvmf_tgt_poll_group_000", 00:16:06.428 "listen_address": { 00:16:06.428 "trtype": "TCP", 00:16:06.428 "adrfam": "IPv4", 00:16:06.428 "traddr": "10.0.0.2", 00:16:06.428 "trsvcid": "4420" 00:16:06.428 }, 00:16:06.428 "peer_address": { 00:16:06.428 "trtype": "TCP", 00:16:06.428 "adrfam": "IPv4", 00:16:06.428 "traddr": "10.0.0.1", 00:16:06.428 "trsvcid": "43462" 00:16:06.428 }, 00:16:06.428 "auth": { 00:16:06.428 "state": "completed", 00:16:06.428 "digest": "sha256", 00:16:06.428 "dhgroup": "ffdhe6144" 00:16:06.428 } 00:16:06.428 } 00:16:06.428 ]' 00:16:06.428 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:06.428 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:06.428 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:06.428 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:06.428 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:06.428 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.428 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.428 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.689 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid 
a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:00:MjlmNzExODE4NTc2MmNmYWU0ZjYwMGI2ZjA4YTcyMjRjNGFmOGNmMTI4ZGQ5NTEyJWIzmQ==: --dhchap-ctrl-secret DHHC-1:03:NWE1MzkzODA0NDk2YThjNTlkNDYxNjBhZTliMDlkM2I2MTY3NjFmODQxMDU5MDU4ZjFhNDMxNzg4MDVlOWZjZEEK018=: 00:16:07.255 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.255 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:16:07.255 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.255 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.513 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.513 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:07.513 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:07.513 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:07.771 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:16:07.771 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:07.771 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:07.771 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:07.771 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:07.771 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.771 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.771 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.771 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.771 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.771 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.771 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.030 00:16:08.030 08:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:08.030 08:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:08.030 08:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.289 08:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.289 08:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.289 08:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.289 08:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.289 08:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.289 08:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:08.289 { 00:16:08.289 "cntlid": 35, 00:16:08.289 "qid": 0, 00:16:08.289 "state": "enabled", 00:16:08.289 "thread": "nvmf_tgt_poll_group_000", 00:16:08.289 "listen_address": { 00:16:08.289 "trtype": "TCP", 00:16:08.289 "adrfam": "IPv4", 00:16:08.289 "traddr": "10.0.0.2", 00:16:08.289 "trsvcid": "4420" 00:16:08.289 }, 00:16:08.289 "peer_address": { 00:16:08.289 "trtype": "TCP", 00:16:08.289 "adrfam": "IPv4", 00:16:08.289 "traddr": "10.0.0.1", 00:16:08.289 "trsvcid": "46524" 00:16:08.289 }, 00:16:08.289 "auth": { 00:16:08.289 "state": "completed", 00:16:08.289 "digest": "sha256", 00:16:08.289 "dhgroup": "ffdhe6144" 00:16:08.289 } 00:16:08.289 } 00:16:08.289 ]' 00:16:08.289 08:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:08.547 08:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:08.547 08:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:08.547 08:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:08.547 08:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:08.547 08:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.547 08:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.547 08:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.806 08:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:01:NzQ0NzIxYmI0MDRjYjI1MjMzYzJlMmRiNWFlZDFjNjXiFtJQ: --dhchap-ctrl-secret DHHC-1:02:MmVmMmUyZjcxYmMyNjdlNGJiNTg0Yjc2ZTM5MWRkNGQxZTQ1Y2MzNTQwNmViOGQ58n44LA==: 00:16:09.373 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.373 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.373 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:16:09.373 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.373 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.373 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.373 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:09.373 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:09.373 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:09.631 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:16:09.631 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:09.631 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:09.631 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:09.631 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:09.631 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.631 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.631 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.631 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.631 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.631 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.632 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.197 00:16:10.198 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:10.198 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.198 08:58:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:10.456 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.456 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.456 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.456 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.456 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.456 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:10.456 { 00:16:10.456 "cntlid": 37, 00:16:10.456 "qid": 0, 00:16:10.456 "state": "enabled", 00:16:10.456 "thread": "nvmf_tgt_poll_group_000", 00:16:10.456 "listen_address": { 00:16:10.456 "trtype": "TCP", 00:16:10.456 "adrfam": "IPv4", 00:16:10.456 "traddr": "10.0.0.2", 00:16:10.456 "trsvcid": "4420" 00:16:10.456 }, 00:16:10.456 "peer_address": { 00:16:10.456 "trtype": "TCP", 00:16:10.456 "adrfam": "IPv4", 00:16:10.456 "traddr": "10.0.0.1", 00:16:10.456 "trsvcid": "46552" 00:16:10.456 }, 00:16:10.456 "auth": { 00:16:10.456 "state": "completed", 00:16:10.456 "digest": "sha256", 00:16:10.456 "dhgroup": "ffdhe6144" 00:16:10.456 } 00:16:10.456 } 00:16:10.456 ]' 00:16:10.456 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:10.456 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:10.456 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:10.456 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:10.456 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:10.456 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.456 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.456 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.022 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:02:YmZlODA0YzdjMGI1ZjgwMjgyZDA3ZGJlYWM3MjAyN2RmMDZmMmJiOTIzY2U3MmYw8KNWIw==: --dhchap-ctrl-secret DHHC-1:01:YWI5ZjhlNjA2NjExMGY3ZTk2ODdhYTE3MTRkODk2MTL2aiAd: 00:16:11.588 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.588 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.588 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:16:11.588 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:11.588 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.588 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.588 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:11.588 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:11.588 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:11.847 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:16:11.847 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:11.847 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:11.847 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:11.847 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:11.847 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.847 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key3 00:16:11.847 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.847 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.847 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.847 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:11.847 08:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:12.481 00:16:12.481 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:12.481 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.481 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:12.481 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.481 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.481 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.481 08:58:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.481 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.481 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:12.481 { 00:16:12.481 "cntlid": 39, 00:16:12.481 "qid": 0, 00:16:12.481 "state": "enabled", 00:16:12.481 "thread": "nvmf_tgt_poll_group_000", 00:16:12.481 "listen_address": { 00:16:12.481 "trtype": "TCP", 00:16:12.481 "adrfam": "IPv4", 00:16:12.481 "traddr": "10.0.0.2", 00:16:12.481 "trsvcid": "4420" 00:16:12.481 }, 00:16:12.481 "peer_address": { 00:16:12.481 "trtype": "TCP", 00:16:12.481 "adrfam": "IPv4", 00:16:12.481 "traddr": "10.0.0.1", 00:16:12.481 "trsvcid": "46586" 00:16:12.481 }, 00:16:12.481 "auth": { 00:16:12.481 "state": "completed", 00:16:12.481 "digest": "sha256", 00:16:12.481 "dhgroup": "ffdhe6144" 00:16:12.481 } 00:16:12.481 } 00:16:12.481 ]' 00:16:12.481 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:12.481 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:12.481 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:12.741 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:12.741 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:12.741 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.741 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.741 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.001 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:03:YjliYzc2OGRhM2Y1YjI4MzAyOGI2NjVmOGRkZmQ3MmQ1NjRhYTEzNDVmNjk2ZWJkOWYzMzk5ODE2M2RkOTI1MlpGH1c=: 00:16:13.569 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.569 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:16:13.569 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.569 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.569 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.569 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:13.569 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:13.569 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:13.569 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:14.135 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:16:14.135 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:14.135 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:14.135 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:14.135 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:14.135 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.135 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.135 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.135 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.135 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.135 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.135 08:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.701 00:16:14.702 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:14.702 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:14.702 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.959 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.959 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.959 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.959 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.959 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.959 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:14.959 { 00:16:14.959 "cntlid": 41, 00:16:14.959 "qid": 0, 
00:16:14.959 "state": "enabled", 00:16:14.959 "thread": "nvmf_tgt_poll_group_000", 00:16:14.959 "listen_address": { 00:16:14.959 "trtype": "TCP", 00:16:14.959 "adrfam": "IPv4", 00:16:14.959 "traddr": "10.0.0.2", 00:16:14.959 "trsvcid": "4420" 00:16:14.959 }, 00:16:14.959 "peer_address": { 00:16:14.959 "trtype": "TCP", 00:16:14.959 "adrfam": "IPv4", 00:16:14.959 "traddr": "10.0.0.1", 00:16:14.959 "trsvcid": "46602" 00:16:14.959 }, 00:16:14.959 "auth": { 00:16:14.959 "state": "completed", 00:16:14.959 "digest": "sha256", 00:16:14.959 "dhgroup": "ffdhe8192" 00:16:14.959 } 00:16:14.959 } 00:16:14.959 ]' 00:16:14.959 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:14.959 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:14.959 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:14.959 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:14.959 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:15.239 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.239 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.239 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.514 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:00:MjlmNzExODE4NTc2MmNmYWU0ZjYwMGI2ZjA4YTcyMjRjNGFmOGNmMTI4ZGQ5NTEyJWIzmQ==: --dhchap-ctrl-secret DHHC-1:03:NWE1MzkzODA0NDk2YThjNTlkNDYxNjBhZTliMDlkM2I2MTY3NjFmODQxMDU5MDU4ZjFhNDMxNzg4MDVlOWZjZEEK018=: 00:16:16.080 08:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.080 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.080 08:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:16:16.080 08:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.080 08:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.080 08:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.080 08:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:16.080 08:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:16.080 08:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:16.347 08:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:16:16.347 08:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:16.347 08:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:16.347 08:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:16.347 08:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:16.347 08:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.347 08:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.347 08:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.347 08:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.347 08:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.347 08:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.347 08:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.919 00:16:16.919 08:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:16.919 08:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:16.919 08:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.485 08:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.485 08:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.485 08:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.485 08:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.485 08:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.485 08:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:17.485 { 00:16:17.485 "cntlid": 43, 00:16:17.485 "qid": 0, 00:16:17.485 "state": "enabled", 00:16:17.485 "thread": "nvmf_tgt_poll_group_000", 00:16:17.485 "listen_address": { 00:16:17.485 "trtype": "TCP", 00:16:17.485 "adrfam": "IPv4", 00:16:17.485 "traddr": "10.0.0.2", 00:16:17.485 "trsvcid": "4420" 00:16:17.485 }, 00:16:17.485 "peer_address": { 00:16:17.485 "trtype": "TCP", 00:16:17.485 "adrfam": "IPv4", 00:16:17.485 "traddr": "10.0.0.1", 
00:16:17.485 "trsvcid": "40462" 00:16:17.485 }, 00:16:17.485 "auth": { 00:16:17.485 "state": "completed", 00:16:17.485 "digest": "sha256", 00:16:17.485 "dhgroup": "ffdhe8192" 00:16:17.485 } 00:16:17.485 } 00:16:17.485 ]' 00:16:17.485 08:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:17.485 08:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:17.485 08:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:17.485 08:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:17.485 08:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:17.485 08:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.485 08:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.485 08:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.744 08:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:01:NzQ0NzIxYmI0MDRjYjI1MjMzYzJlMmRiNWFlZDFjNjXiFtJQ: --dhchap-ctrl-secret DHHC-1:02:MmVmMmUyZjcxYmMyNjdlNGJiNTg0Yjc2ZTM5MWRkNGQxZTQ1Y2MzNTQwNmViOGQ58n44LA==: 00:16:18.309 08:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.309 08:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:16:18.309 08:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.309 08:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.309 08:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.309 08:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:18.309 08:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:18.309 08:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:18.875 08:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:16:18.875 08:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:18.875 08:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:18.875 08:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:18.875 08:58:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:18.875 08:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.875 08:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.875 08:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.875 08:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.875 08:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.875 08:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.875 08:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.441 00:16:19.441 08:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:19.441 08:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:19.441 08:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.441 08:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.441 08:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.441 08:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.441 08:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.441 08:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.441 08:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:19.442 { 00:16:19.442 "cntlid": 45, 00:16:19.442 "qid": 0, 00:16:19.442 "state": "enabled", 00:16:19.442 "thread": "nvmf_tgt_poll_group_000", 00:16:19.442 "listen_address": { 00:16:19.442 "trtype": "TCP", 00:16:19.442 "adrfam": "IPv4", 00:16:19.442 "traddr": "10.0.0.2", 00:16:19.442 "trsvcid": "4420" 00:16:19.442 }, 00:16:19.442 "peer_address": { 00:16:19.442 "trtype": "TCP", 00:16:19.442 "adrfam": "IPv4", 00:16:19.442 "traddr": "10.0.0.1", 00:16:19.442 "trsvcid": "40504" 00:16:19.442 }, 00:16:19.442 "auth": { 00:16:19.442 "state": "completed", 00:16:19.442 "digest": "sha256", 00:16:19.442 "dhgroup": "ffdhe8192" 00:16:19.442 } 00:16:19.442 } 00:16:19.442 ]' 00:16:19.442 08:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:19.700 08:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # 
[[ sha256 == \s\h\a\2\5\6 ]] 00:16:19.700 08:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:19.700 08:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:19.700 08:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:19.700 08:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.700 08:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.700 08:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.958 08:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:02:YmZlODA0YzdjMGI1ZjgwMjgyZDA3ZGJlYWM3MjAyN2RmMDZmMmJiOTIzY2U3MmYw8KNWIw==: --dhchap-ctrl-secret DHHC-1:01:YWI5ZjhlNjA2NjExMGY3ZTk2ODdhYTE3MTRkODk2MTL2aiAd: 00:16:20.523 08:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.523 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.524 08:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:16:20.524 08:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.524 08:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.524 08:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.524 08:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:20.524 08:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:20.524 08:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:21.090 08:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:16:21.090 08:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:21.090 08:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:21.090 08:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:21.090 08:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:21.090 08:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.090 08:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 
--dhchap-key key3 00:16:21.090 08:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.090 08:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.090 08:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.090 08:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:21.090 08:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:21.655 00:16:21.655 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:21.655 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.655 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:21.913 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.913 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.913 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.913 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.913 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.913 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:21.913 { 00:16:21.913 "cntlid": 47, 00:16:21.913 "qid": 0, 00:16:21.913 "state": "enabled", 00:16:21.913 "thread": "nvmf_tgt_poll_group_000", 00:16:21.913 "listen_address": { 00:16:21.913 "trtype": "TCP", 00:16:21.913 "adrfam": "IPv4", 00:16:21.913 "traddr": "10.0.0.2", 00:16:21.913 "trsvcid": "4420" 00:16:21.913 }, 00:16:21.913 "peer_address": { 00:16:21.913 "trtype": "TCP", 00:16:21.913 "adrfam": "IPv4", 00:16:21.913 "traddr": "10.0.0.1", 00:16:21.913 "trsvcid": "40536" 00:16:21.913 }, 00:16:21.913 "auth": { 00:16:21.913 "state": "completed", 00:16:21.913 "digest": "sha256", 00:16:21.913 "dhgroup": "ffdhe8192" 00:16:21.913 } 00:16:21.913 } 00:16:21.913 ]' 00:16:21.913 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:21.913 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:21.913 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:21.913 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:21.913 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:21.913 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:16:21.913 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.913 08:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.172 08:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:03:YjliYzc2OGRhM2Y1YjI4MzAyOGI2NjVmOGRkZmQ3MmQ1NjRhYTEzNDVmNjk2ZWJkOWYzMzk5ODE2M2RkOTI1MlpGH1c=: 00:16:23.107 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.107 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:16:23.107 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.107 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.107 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.107 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:23.107 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:23.107 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:23.107 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:23.108 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:23.366 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:16:23.366 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:23.366 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:23.366 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:23.366 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:23.366 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.366 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.366 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.367 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.367 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.367 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.367 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.625 00:16:23.625 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:23.625 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:23.625 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.884 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.885 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.885 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.885 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.885 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.885 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:23.885 { 00:16:23.885 "cntlid": 49, 00:16:23.885 "qid": 0, 00:16:23.885 "state": "enabled", 00:16:23.885 "thread": "nvmf_tgt_poll_group_000", 00:16:23.885 "listen_address": { 00:16:23.885 "trtype": "TCP", 00:16:23.885 "adrfam": "IPv4", 00:16:23.885 "traddr": "10.0.0.2", 00:16:23.885 "trsvcid": "4420" 00:16:23.885 }, 00:16:23.885 "peer_address": { 00:16:23.885 "trtype": "TCP", 00:16:23.885 "adrfam": "IPv4", 00:16:23.885 "traddr": "10.0.0.1", 00:16:23.885 "trsvcid": "40552" 00:16:23.885 }, 00:16:23.885 "auth": { 00:16:23.885 "state": "completed", 00:16:23.885 "digest": "sha384", 00:16:23.885 "dhgroup": "null" 00:16:23.885 } 00:16:23.885 } 00:16:23.885 ]' 00:16:23.885 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:23.885 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:23.885 08:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:24.143 08:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:24.143 08:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:24.143 08:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.143 08:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.143 08:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.401 08:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:00:MjlmNzExODE4NTc2MmNmYWU0ZjYwMGI2ZjA4YTcyMjRjNGFmOGNmMTI4ZGQ5NTEyJWIzmQ==: --dhchap-ctrl-secret DHHC-1:03:NWE1MzkzODA0NDk2YThjNTlkNDYxNjBhZTliMDlkM2I2MTY3NjFmODQxMDU5MDU4ZjFhNDMxNzg4MDVlOWZjZEEK018=: 00:16:24.968 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.968 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:16:24.968 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.968 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.968 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.968 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:24.968 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:24.968 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:25.226 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:16:25.226 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:25.226 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:25.226 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:25.226 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:25.226 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.226 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.226 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.226 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.485 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.485 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.485 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.744 00:16:25.744 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:25.744 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.744 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:26.002 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.002 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.002 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.002 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.002 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.002 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:26.002 { 00:16:26.002 "cntlid": 51, 00:16:26.002 "qid": 0, 00:16:26.002 "state": "enabled", 00:16:26.002 "thread": "nvmf_tgt_poll_group_000", 00:16:26.002 "listen_address": { 00:16:26.002 "trtype": "TCP", 00:16:26.002 "adrfam": "IPv4", 00:16:26.002 "traddr": "10.0.0.2", 00:16:26.002 "trsvcid": "4420" 00:16:26.002 }, 00:16:26.002 "peer_address": { 00:16:26.002 "trtype": "TCP", 00:16:26.002 "adrfam": "IPv4", 00:16:26.002 "traddr": "10.0.0.1", 00:16:26.002 "trsvcid": "40580" 00:16:26.002 }, 00:16:26.002 "auth": { 00:16:26.002 "state": "completed", 00:16:26.002 "digest": "sha384", 00:16:26.002 "dhgroup": "null" 00:16:26.002 } 00:16:26.002 } 00:16:26.002 ]' 00:16:26.002 08:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:26.002 08:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:26.002 08:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:26.002 08:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:26.002 08:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:26.261 08:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.261 08:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.261 08:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.518 08:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:01:NzQ0NzIxYmI0MDRjYjI1MjMzYzJlMmRiNWFlZDFjNjXiFtJQ: --dhchap-ctrl-secret 
DHHC-1:02:MmVmMmUyZjcxYmMyNjdlNGJiNTg0Yjc2ZTM5MWRkNGQxZTQ1Y2MzNTQwNmViOGQ58n44LA==: 00:16:27.083 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.083 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.083 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:16:27.083 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.083 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.083 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.083 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:27.083 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:27.083 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:27.341 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:16:27.341 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:27.341 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:27.341 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:27.341 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:27.341 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.341 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.341 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.341 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.341 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.341 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.341 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.671 00:16:27.671 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:27.671 08:58:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:27.671 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.929 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.929 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.929 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.929 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.929 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.929 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:27.929 { 00:16:27.929 "cntlid": 53, 00:16:27.929 "qid": 0, 00:16:27.929 "state": "enabled", 00:16:27.929 "thread": "nvmf_tgt_poll_group_000", 00:16:27.929 "listen_address": { 00:16:27.929 "trtype": "TCP", 00:16:27.929 "adrfam": "IPv4", 00:16:27.929 "traddr": "10.0.0.2", 00:16:27.929 "trsvcid": "4420" 00:16:27.929 }, 00:16:27.929 "peer_address": { 00:16:27.929 "trtype": "TCP", 00:16:27.929 "adrfam": "IPv4", 00:16:27.929 "traddr": "10.0.0.1", 00:16:27.929 "trsvcid": "48172" 00:16:27.929 }, 00:16:27.929 "auth": { 00:16:27.929 "state": "completed", 00:16:27.929 "digest": "sha384", 00:16:27.929 "dhgroup": "null" 00:16:27.929 } 00:16:27.929 } 00:16:27.929 ]' 00:16:27.929 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:27.929 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:27.929 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:28.188 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:28.188 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:28.188 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.188 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.188 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.447 08:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:02:YmZlODA0YzdjMGI1ZjgwMjgyZDA3ZGJlYWM3MjAyN2RmMDZmMmJiOTIzY2U3MmYw8KNWIw==: --dhchap-ctrl-secret DHHC-1:01:YWI5ZjhlNjA2NjExMGY3ZTk2ODdhYTE3MTRkODk2MTL2aiAd: 00:16:29.014 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.272 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:16:29.272 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.272 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.272 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.272 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:29.272 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:29.272 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:29.530 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:16:29.530 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:29.530 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:29.530 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:29.530 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:29.530 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.530 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key3 00:16:29.530 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.530 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.530 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.530 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:29.530 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:29.787 00:16:29.787 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:29.787 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:29.787 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.045 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.045 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:30.045 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.045 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.045 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.045 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:30.045 { 00:16:30.045 "cntlid": 55, 00:16:30.045 "qid": 0, 00:16:30.045 "state": "enabled", 00:16:30.045 "thread": "nvmf_tgt_poll_group_000", 00:16:30.045 "listen_address": { 00:16:30.045 "trtype": "TCP", 00:16:30.045 "adrfam": "IPv4", 00:16:30.045 "traddr": "10.0.0.2", 00:16:30.045 "trsvcid": "4420" 00:16:30.045 }, 00:16:30.045 "peer_address": { 00:16:30.045 "trtype": "TCP", 00:16:30.045 "adrfam": "IPv4", 00:16:30.045 "traddr": "10.0.0.1", 00:16:30.045 "trsvcid": "48190" 00:16:30.045 }, 00:16:30.045 "auth": { 00:16:30.045 "state": "completed", 00:16:30.045 "digest": "sha384", 00:16:30.045 "dhgroup": "null" 00:16:30.045 } 00:16:30.045 } 00:16:30.045 ]' 00:16:30.045 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:30.045 08:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:30.045 08:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:30.045 08:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:30.045 08:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:30.045 08:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.045 08:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.045 08:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.304 08:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:03:YjliYzc2OGRhM2Y1YjI4MzAyOGI2NjVmOGRkZmQ3MmQ1NjRhYTEzNDVmNjk2ZWJkOWYzMzk5ODE2M2RkOTI1MlpGH1c=: 00:16:31.238 08:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.238 08:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:16:31.238 08:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.238 08:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.238 08:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.238 08:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:31.238 08:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:31.238 08:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:31.238 08:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:31.238 08:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:16:31.238 08:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:31.238 08:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:31.238 08:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:31.238 08:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:31.238 08:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.238 08:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.238 08:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.238 08:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.238 08:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.238 08:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.238 08:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.802 00:16:31.802 08:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:31.802 08:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:31.802 08:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.059 08:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.059 08:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.059 08:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.059 08:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.059 08:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.059 08:58:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:32.059 { 00:16:32.059 "cntlid": 57, 00:16:32.059 "qid": 0, 00:16:32.059 "state": "enabled", 00:16:32.059 "thread": "nvmf_tgt_poll_group_000", 00:16:32.059 "listen_address": { 00:16:32.059 "trtype": "TCP", 00:16:32.059 "adrfam": "IPv4", 00:16:32.059 "traddr": "10.0.0.2", 00:16:32.059 "trsvcid": "4420" 00:16:32.059 }, 00:16:32.059 "peer_address": { 00:16:32.059 "trtype": "TCP", 00:16:32.059 "adrfam": "IPv4", 00:16:32.059 "traddr": "10.0.0.1", 00:16:32.059 "trsvcid": "48212" 00:16:32.059 }, 00:16:32.059 "auth": { 00:16:32.059 "state": "completed", 00:16:32.059 "digest": "sha384", 00:16:32.059 "dhgroup": "ffdhe2048" 00:16:32.059 } 00:16:32.059 } 00:16:32.059 ]' 00:16:32.059 08:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:32.059 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:32.059 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:32.059 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:32.059 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:32.059 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.059 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.059 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.317 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:00:MjlmNzExODE4NTc2MmNmYWU0ZjYwMGI2ZjA4YTcyMjRjNGFmOGNmMTI4ZGQ5NTEyJWIzmQ==: --dhchap-ctrl-secret DHHC-1:03:NWE1MzkzODA0NDk2YThjNTlkNDYxNjBhZTliMDlkM2I2MTY3NjFmODQxMDU5MDU4ZjFhNDMxNzg4MDVlOWZjZEEK018=: 00:16:33.250 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.250 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:16:33.250 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.250 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.250 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.250 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:33.250 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:33.250 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:33.508 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:16:33.508 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:33.508 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:33.508 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:33.508 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:33.508 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.508 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.508 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.508 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.508 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.508 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.508 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.766 00:16:33.766 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:33.766 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:33.766 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.025 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.025 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.025 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.025 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.025 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.025 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:34.025 { 00:16:34.025 "cntlid": 59, 00:16:34.025 "qid": 0, 00:16:34.025 "state": "enabled", 00:16:34.025 "thread": "nvmf_tgt_poll_group_000", 00:16:34.025 "listen_address": { 00:16:34.025 "trtype": "TCP", 00:16:34.025 "adrfam": "IPv4", 00:16:34.025 "traddr": "10.0.0.2", 00:16:34.025 "trsvcid": "4420" 
00:16:34.025 }, 00:16:34.025 "peer_address": { 00:16:34.025 "trtype": "TCP", 00:16:34.025 "adrfam": "IPv4", 00:16:34.025 "traddr": "10.0.0.1", 00:16:34.025 "trsvcid": "48242" 00:16:34.025 }, 00:16:34.025 "auth": { 00:16:34.025 "state": "completed", 00:16:34.025 "digest": "sha384", 00:16:34.025 "dhgroup": "ffdhe2048" 00:16:34.025 } 00:16:34.025 } 00:16:34.025 ]' 00:16:34.025 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:34.025 08:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:34.025 08:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:34.025 08:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:34.025 08:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:34.283 08:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.283 08:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.283 08:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.540 08:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:01:NzQ0NzIxYmI0MDRjYjI1MjMzYzJlMmRiNWFlZDFjNjXiFtJQ: --dhchap-ctrl-secret DHHC-1:02:MmVmMmUyZjcxYmMyNjdlNGJiNTg0Yjc2ZTM5MWRkNGQxZTQ1Y2MzNTQwNmViOGQ58n44LA==: 00:16:35.106 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.106 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:16:35.106 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.106 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.106 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.106 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:35.106 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:35.106 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:35.364 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:16:35.364 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:35.364 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
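The per-iteration setup that target/auth.sh keeps repeating above (pick a digest/dhgroup pair, authorize the host on the subsystem with a DH-HMAC-CHAP key pair, attach a controller through the host-side RPC) can be condensed into a short bash sketch. This is a reading aid only, not part of the trace: it assumes the socket path, addresses, NQNs and key names shown in the log, that the target RPC listens on its default socket, and that the key objects (key0/ckey0) were registered earlier in the script.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py        # SPDK RPC client used in the trace
  hostsock=/var/tmp/host.sock                            # host-side RPC socket used in the trace
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5

  # Limit the host bdev_nvme layer to one digest/dhgroup combination.
  "$rpc" -s "$hostsock" bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

  # Authorize the host on the target subsystem with a key / controller-key pair
  # (assumes the target RPC is on the default socket, as rpc_cmd uses in the trace).
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Attach a controller from the host application, authenticating with the same keys.
  "$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0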
00:16:35.364 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:35.364 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:35.364 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.364 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.364 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.364 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.364 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.364 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.364 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.621 00:16:35.879 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:35.879 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:35.879 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.879 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.879 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.879 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.879 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.136 08:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.136 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:36.136 { 00:16:36.136 "cntlid": 61, 00:16:36.136 "qid": 0, 00:16:36.136 "state": "enabled", 00:16:36.136 "thread": "nvmf_tgt_poll_group_000", 00:16:36.136 "listen_address": { 00:16:36.136 "trtype": "TCP", 00:16:36.136 "adrfam": "IPv4", 00:16:36.136 "traddr": "10.0.0.2", 00:16:36.136 "trsvcid": "4420" 00:16:36.136 }, 00:16:36.136 "peer_address": { 00:16:36.136 "trtype": "TCP", 00:16:36.136 "adrfam": "IPv4", 00:16:36.136 "traddr": "10.0.0.1", 00:16:36.136 "trsvcid": "48264" 00:16:36.136 }, 00:16:36.136 "auth": { 00:16:36.136 "state": "completed", 00:16:36.136 "digest": "sha384", 00:16:36.136 "dhgroup": "ffdhe2048" 00:16:36.136 } 00:16:36.136 } 00:16:36.136 ]' 00:16:36.136 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:36.136 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:36.136 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:36.136 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:36.136 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:36.136 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.136 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.136 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.434 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:02:YmZlODA0YzdjMGI1ZjgwMjgyZDA3ZGJlYWM3MjAyN2RmMDZmMmJiOTIzY2U3MmYw8KNWIw==: --dhchap-ctrl-secret DHHC-1:01:YWI5ZjhlNjA2NjExMGY3ZTk2ODdhYTE3MTRkODk2MTL2aiAd: 00:16:36.999 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.999 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:16:37.000 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.000 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.000 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.000 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:37.000 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:37.000 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:37.285 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:16:37.285 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:37.285 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:37.285 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:37.286 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:37.286 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.286 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key3 00:16:37.286 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.286 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.286 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.286 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:37.286 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:37.545 00:16:37.545 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:37.545 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.545 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:37.803 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.803 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.803 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.803 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.803 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.803 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:37.803 { 00:16:37.803 "cntlid": 63, 00:16:37.803 "qid": 0, 00:16:37.803 "state": "enabled", 00:16:37.803 "thread": "nvmf_tgt_poll_group_000", 00:16:37.803 "listen_address": { 00:16:37.803 "trtype": "TCP", 00:16:37.803 "adrfam": "IPv4", 00:16:37.803 "traddr": "10.0.0.2", 00:16:37.803 "trsvcid": "4420" 00:16:37.803 }, 00:16:37.803 "peer_address": { 00:16:37.803 "trtype": "TCP", 00:16:37.803 "adrfam": "IPv4", 00:16:37.803 "traddr": "10.0.0.1", 00:16:37.803 "trsvcid": "38982" 00:16:37.803 }, 00:16:37.803 "auth": { 00:16:37.803 "state": "completed", 00:16:37.803 "digest": "sha384", 00:16:37.803 "dhgroup": "ffdhe2048" 00:16:37.803 } 00:16:37.803 } 00:16:37.803 ]' 00:16:37.803 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:38.061 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:38.061 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:38.061 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:38.061 08:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 
-- # jq -r '.[0].auth.state' 00:16:38.061 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.061 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.061 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.319 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:03:YjliYzc2OGRhM2Y1YjI4MzAyOGI2NjVmOGRkZmQ3MmQ1NjRhYTEzNDVmNjk2ZWJkOWYzMzk5ODE2M2RkOTI1MlpGH1c=: 00:16:38.884 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.884 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:16:38.884 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.884 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.884 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.884 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:38.884 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:38.884 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:38.884 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:39.450 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:16:39.450 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:39.450 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:39.450 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:39.450 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:39.450 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.450 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.450 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.450 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.450 08:58:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.450 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.450 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.708 00:16:39.708 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:39.708 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.708 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:39.967 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.967 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.967 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.967 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.967 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.967 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:39.967 { 00:16:39.967 "cntlid": 65, 00:16:39.967 "qid": 0, 00:16:39.967 "state": "enabled", 00:16:39.967 "thread": "nvmf_tgt_poll_group_000", 00:16:39.967 "listen_address": { 00:16:39.967 "trtype": "TCP", 00:16:39.967 "adrfam": "IPv4", 00:16:39.967 "traddr": "10.0.0.2", 00:16:39.967 "trsvcid": "4420" 00:16:39.967 }, 00:16:39.967 "peer_address": { 00:16:39.967 "trtype": "TCP", 00:16:39.967 "adrfam": "IPv4", 00:16:39.967 "traddr": "10.0.0.1", 00:16:39.967 "trsvcid": "39016" 00:16:39.967 }, 00:16:39.967 "auth": { 00:16:39.967 "state": "completed", 00:16:39.967 "digest": "sha384", 00:16:39.967 "dhgroup": "ffdhe3072" 00:16:39.967 } 00:16:39.967 } 00:16:39.967 ]' 00:16:39.967 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:39.967 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:39.967 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:39.967 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:39.967 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:40.225 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.225 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.225 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.485 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:00:MjlmNzExODE4NTc2MmNmYWU0ZjYwMGI2ZjA4YTcyMjRjNGFmOGNmMTI4ZGQ5NTEyJWIzmQ==: --dhchap-ctrl-secret DHHC-1:03:NWE1MzkzODA0NDk2YThjNTlkNDYxNjBhZTliMDlkM2I2MTY3NjFmODQxMDU5MDU4ZjFhNDMxNzg4MDVlOWZjZEEK018=: 00:16:41.051 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.051 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:16:41.051 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.051 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.051 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.051 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:41.051 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:41.051 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:41.309 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:16:41.309 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:41.309 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:41.309 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:41.309 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:41.309 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.309 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.309 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.309 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.309 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.309 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:16:41.309 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.876 00:16:41.876 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:41.876 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.876 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:42.134 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.134 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.134 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.134 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.134 08:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.134 08:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:42.134 { 00:16:42.134 "cntlid": 67, 00:16:42.134 "qid": 0, 00:16:42.134 "state": "enabled", 00:16:42.134 "thread": "nvmf_tgt_poll_group_000", 00:16:42.134 "listen_address": { 00:16:42.134 "trtype": "TCP", 00:16:42.134 "adrfam": "IPv4", 00:16:42.134 "traddr": "10.0.0.2", 00:16:42.134 "trsvcid": "4420" 00:16:42.134 }, 00:16:42.134 "peer_address": { 00:16:42.134 "trtype": "TCP", 00:16:42.134 "adrfam": "IPv4", 00:16:42.134 "traddr": "10.0.0.1", 00:16:42.134 "trsvcid": "39036" 00:16:42.134 }, 00:16:42.134 "auth": { 00:16:42.134 "state": "completed", 00:16:42.134 "digest": "sha384", 00:16:42.134 "dhgroup": "ffdhe3072" 00:16:42.134 } 00:16:42.134 } 00:16:42.134 ]' 00:16:42.134 08:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:42.134 08:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:42.134 08:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:42.134 08:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:42.134 08:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:42.134 08:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.134 08:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.134 08:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.393 08:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid 
a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:01:NzQ0NzIxYmI0MDRjYjI1MjMzYzJlMmRiNWFlZDFjNjXiFtJQ: --dhchap-ctrl-secret DHHC-1:02:MmVmMmUyZjcxYmMyNjdlNGJiNTg0Yjc2ZTM5MWRkNGQxZTQ1Y2MzNTQwNmViOGQ58n44LA==: 00:16:42.960 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.960 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.960 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:16:42.960 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.960 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.960 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.960 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:42.960 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:42.960 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:43.569 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:16:43.569 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:43.569 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:43.569 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:43.569 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:43.569 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.569 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.569 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.569 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.569 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.569 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.569 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
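For readers following the trace, the host-side attach step logged just above reduces to the short sketch below. Paths, NQNs, and key names are taken verbatim from the log; how key2/ckey2 were generated and registered happens earlier in auth.sh and is outside this excerpt.

# Host-side attach with bidirectional DH-HMAC-CHAP (sketch of the step above).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5

# Present key2 to the target and require the controller to prove ckey2.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# The controller name is then checked before querying the target's qpairs.
"$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'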
00:16:43.850 00:16:43.850 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:43.850 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:43.850 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.850 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.850 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.850 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.850 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.109 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.109 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:44.109 { 00:16:44.109 "cntlid": 69, 00:16:44.109 "qid": 0, 00:16:44.109 "state": "enabled", 00:16:44.109 "thread": "nvmf_tgt_poll_group_000", 00:16:44.109 "listen_address": { 00:16:44.109 "trtype": "TCP", 00:16:44.109 "adrfam": "IPv4", 00:16:44.109 "traddr": "10.0.0.2", 00:16:44.109 "trsvcid": "4420" 00:16:44.109 }, 00:16:44.109 "peer_address": { 00:16:44.109 "trtype": "TCP", 00:16:44.109 "adrfam": "IPv4", 00:16:44.109 "traddr": "10.0.0.1", 00:16:44.109 "trsvcid": "39072" 00:16:44.109 }, 00:16:44.109 "auth": { 00:16:44.109 "state": "completed", 00:16:44.109 "digest": "sha384", 00:16:44.109 "dhgroup": "ffdhe3072" 00:16:44.109 } 00:16:44.109 } 00:16:44.109 ]' 00:16:44.109 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:44.109 08:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:44.109 08:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:44.109 08:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:44.109 08:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:44.109 08:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.109 08:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.109 08:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.368 08:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:02:YmZlODA0YzdjMGI1ZjgwMjgyZDA3ZGJlYWM3MjAyN2RmMDZmMmJiOTIzY2U3MmYw8KNWIw==: --dhchap-ctrl-secret DHHC-1:01:YWI5ZjhlNjA2NjExMGY3ZTk2ODdhYTE3MTRkODk2MTL2aiAd: 00:16:45.372 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.372 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
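The same key pair is also exercised through the kernel initiator, which is what the nvme connect/disconnect pair recorded above shows. A condensed, standalone version of that round trip follows; the two placeholder variables stand for the full DHHC-1 secrets printed in the log and are not real values.

# Kernel-initiator round trip with DH-HMAC-CHAP (placeholders for the secrets).
hostid=a4705431-95c9-4bc1-9185-4a8233d2d7f5
host_key='DHHC-1:02:<host secret from the log>'        # passed as --dhchap-secret
ctrl_key='DHHC-1:01:<controller secret from the log>'  # passed as --dhchap-ctrl-secret

# Connect as the UUID-based host NQN, authenticating in both directions,
# then tear the association down again.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "nqn.2014-08.org.nvmexpress:uuid:${hostid}" --hostid "$hostid" \
    --dhchap-secret "$host_key" --dhchap-ctrl-secret "$ctrl_key"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0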
00:16:45.372 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:16:45.372 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.372 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.372 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.372 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:45.372 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:45.372 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:45.372 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:16:45.372 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:45.373 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:45.373 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:45.373 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:45.373 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.373 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key3 00:16:45.373 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.373 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.373 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.373 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:45.373 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:45.939 00:16:45.939 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:45.939 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:45.939 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.939 08:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.939 08:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.939 08:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.939 08:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.197 08:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.197 08:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:46.197 { 00:16:46.197 "cntlid": 71, 00:16:46.197 "qid": 0, 00:16:46.197 "state": "enabled", 00:16:46.197 "thread": "nvmf_tgt_poll_group_000", 00:16:46.197 "listen_address": { 00:16:46.197 "trtype": "TCP", 00:16:46.197 "adrfam": "IPv4", 00:16:46.197 "traddr": "10.0.0.2", 00:16:46.197 "trsvcid": "4420" 00:16:46.197 }, 00:16:46.197 "peer_address": { 00:16:46.197 "trtype": "TCP", 00:16:46.197 "adrfam": "IPv4", 00:16:46.197 "traddr": "10.0.0.1", 00:16:46.197 "trsvcid": "39110" 00:16:46.197 }, 00:16:46.197 "auth": { 00:16:46.197 "state": "completed", 00:16:46.197 "digest": "sha384", 00:16:46.197 "dhgroup": "ffdhe3072" 00:16:46.197 } 00:16:46.197 } 00:16:46.197 ]' 00:16:46.197 08:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:46.197 08:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:46.197 08:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:46.197 08:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:46.197 08:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:46.197 08:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.197 08:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.197 08:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.453 08:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:03:YjliYzc2OGRhM2Y1YjI4MzAyOGI2NjVmOGRkZmQ3MmQ1NjRhYTEzNDVmNjk2ZWJkOWYzMzk5ODE2M2RkOTI1MlpGH1c=: 00:16:47.385 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.385 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:16:47.385 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.385 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.385 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:16:47.385 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:47.385 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:47.385 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:47.385 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:47.385 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:16:47.386 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:47.386 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:47.386 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:47.386 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:47.386 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.386 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.386 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.386 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.386 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.386 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.386 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.951 00:16:47.951 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:47.951 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:47.951 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.209 08:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.209 08:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.209 08:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.209 08:58:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.209 08:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.209 08:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:48.209 { 00:16:48.209 "cntlid": 73, 00:16:48.209 "qid": 0, 00:16:48.209 "state": "enabled", 00:16:48.209 "thread": "nvmf_tgt_poll_group_000", 00:16:48.209 "listen_address": { 00:16:48.209 "trtype": "TCP", 00:16:48.209 "adrfam": "IPv4", 00:16:48.209 "traddr": "10.0.0.2", 00:16:48.209 "trsvcid": "4420" 00:16:48.209 }, 00:16:48.209 "peer_address": { 00:16:48.209 "trtype": "TCP", 00:16:48.209 "adrfam": "IPv4", 00:16:48.209 "traddr": "10.0.0.1", 00:16:48.209 "trsvcid": "42876" 00:16:48.209 }, 00:16:48.209 "auth": { 00:16:48.209 "state": "completed", 00:16:48.209 "digest": "sha384", 00:16:48.209 "dhgroup": "ffdhe4096" 00:16:48.209 } 00:16:48.209 } 00:16:48.209 ]' 00:16:48.209 08:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:48.209 08:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:48.209 08:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:48.209 08:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:48.209 08:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:48.209 08:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.209 08:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.209 08:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.774 08:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:00:MjlmNzExODE4NTc2MmNmYWU0ZjYwMGI2ZjA4YTcyMjRjNGFmOGNmMTI4ZGQ5NTEyJWIzmQ==: --dhchap-ctrl-secret DHHC-1:03:NWE1MzkzODA0NDk2YThjNTlkNDYxNjBhZTliMDlkM2I2MTY3NjFmODQxMDU5MDU4ZjFhNDMxNzg4MDVlOWZjZEEK018=: 00:16:49.403 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.403 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:16:49.403 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.403 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.403 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.403 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:49.403 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:49.403 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:49.666 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:16:49.666 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:49.666 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:49.666 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:49.666 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:49.666 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.666 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.666 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.666 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.666 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.666 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.666 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.925 00:16:49.925 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:49.925 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:49.925 08:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.184 08:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.184 08:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.184 08:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.184 08:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.184 08:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.184 08:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:50.184 { 00:16:50.184 "cntlid": 75, 00:16:50.184 "qid": 0, 00:16:50.184 
"state": "enabled", 00:16:50.184 "thread": "nvmf_tgt_poll_group_000", 00:16:50.184 "listen_address": { 00:16:50.184 "trtype": "TCP", 00:16:50.184 "adrfam": "IPv4", 00:16:50.184 "traddr": "10.0.0.2", 00:16:50.184 "trsvcid": "4420" 00:16:50.184 }, 00:16:50.184 "peer_address": { 00:16:50.184 "trtype": "TCP", 00:16:50.184 "adrfam": "IPv4", 00:16:50.184 "traddr": "10.0.0.1", 00:16:50.184 "trsvcid": "42912" 00:16:50.184 }, 00:16:50.184 "auth": { 00:16:50.184 "state": "completed", 00:16:50.184 "digest": "sha384", 00:16:50.184 "dhgroup": "ffdhe4096" 00:16:50.184 } 00:16:50.184 } 00:16:50.184 ]' 00:16:50.184 08:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:50.184 08:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:50.184 08:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:50.184 08:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:50.184 08:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:50.443 08:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.443 08:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.443 08:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.701 08:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:01:NzQ0NzIxYmI0MDRjYjI1MjMzYzJlMmRiNWFlZDFjNjXiFtJQ: --dhchap-ctrl-secret DHHC-1:02:MmVmMmUyZjcxYmMyNjdlNGJiNTg0Yjc2ZTM5MWRkNGQxZTQ1Y2MzNTQwNmViOGQ58n44LA==: 00:16:51.267 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.267 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:16:51.267 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.267 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.267 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.267 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:51.267 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:51.267 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:51.525 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 
00:16:51.525 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:51.525 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:51.525 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:51.525 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:51.525 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.525 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.526 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.526 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.526 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.526 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.526 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.092 00:16:52.092 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:52.092 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:52.092 08:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.350 08:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.350 08:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.350 08:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.350 08:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.350 08:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.350 08:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:52.350 { 00:16:52.350 "cntlid": 77, 00:16:52.350 "qid": 0, 00:16:52.350 "state": "enabled", 00:16:52.350 "thread": "nvmf_tgt_poll_group_000", 00:16:52.350 "listen_address": { 00:16:52.350 "trtype": "TCP", 00:16:52.350 "adrfam": "IPv4", 00:16:52.350 "traddr": "10.0.0.2", 00:16:52.350 "trsvcid": "4420" 00:16:52.350 }, 00:16:52.350 "peer_address": { 00:16:52.350 "trtype": "TCP", 00:16:52.350 "adrfam": "IPv4", 00:16:52.350 "traddr": "10.0.0.1", 00:16:52.350 "trsvcid": "42942" 00:16:52.350 }, 00:16:52.350 
"auth": { 00:16:52.350 "state": "completed", 00:16:52.350 "digest": "sha384", 00:16:52.350 "dhgroup": "ffdhe4096" 00:16:52.350 } 00:16:52.350 } 00:16:52.350 ]' 00:16:52.350 08:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:52.350 08:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:52.350 08:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:52.350 08:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:52.350 08:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:52.351 08:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.351 08:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.351 08:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.609 08:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:02:YmZlODA0YzdjMGI1ZjgwMjgyZDA3ZGJlYWM3MjAyN2RmMDZmMmJiOTIzY2U3MmYw8KNWIw==: --dhchap-ctrl-secret DHHC-1:01:YWI5ZjhlNjA2NjExMGY3ZTk2ODdhYTE3MTRkODk2MTL2aiAd: 00:16:53.543 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.543 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:16:53.543 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.543 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.543 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.543 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:53.543 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:53.543 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:53.801 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:16:53.801 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:53.801 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:53.801 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:53.801 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key3 00:16:53.801 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.801 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key3 00:16:53.801 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.801 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.801 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.801 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:53.801 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:54.059 00:16:54.059 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:54.059 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.059 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:54.317 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.317 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.317 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.317 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.575 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.575 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:54.575 { 00:16:54.575 "cntlid": 79, 00:16:54.575 "qid": 0, 00:16:54.575 "state": "enabled", 00:16:54.575 "thread": "nvmf_tgt_poll_group_000", 00:16:54.575 "listen_address": { 00:16:54.575 "trtype": "TCP", 00:16:54.575 "adrfam": "IPv4", 00:16:54.575 "traddr": "10.0.0.2", 00:16:54.575 "trsvcid": "4420" 00:16:54.575 }, 00:16:54.575 "peer_address": { 00:16:54.575 "trtype": "TCP", 00:16:54.575 "adrfam": "IPv4", 00:16:54.575 "traddr": "10.0.0.1", 00:16:54.575 "trsvcid": "42964" 00:16:54.575 }, 00:16:54.575 "auth": { 00:16:54.575 "state": "completed", 00:16:54.575 "digest": "sha384", 00:16:54.575 "dhgroup": "ffdhe4096" 00:16:54.575 } 00:16:54.575 } 00:16:54.575 ]' 00:16:54.575 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:54.575 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:54.575 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 
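Each pass ends with the target-side check the log is printing around this point: the subsystem's qpair list is fetched and the negotiated authentication parameters are compared against what the pass configured. A minimal standalone version of that verification, with rpc_cmd standing for the target-side scripts/rpc.py invocation used by the script:

# Confirm the established qpair negotiated the expected parameters and that
# DH-HMAC-CHAP actually completed.
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]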
00:16:54.575 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:54.575 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:54.575 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.575 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.575 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.833 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:03:YjliYzc2OGRhM2Y1YjI4MzAyOGI2NjVmOGRkZmQ3MmQ1NjRhYTEzNDVmNjk2ZWJkOWYzMzk5ODE2M2RkOTI1MlpGH1c=: 00:16:55.399 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.399 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:16:55.399 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.399 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.399 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.399 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:55.399 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:55.399 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:55.399 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:55.675 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:16:55.675 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:55.675 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:55.675 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:55.675 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:55.675 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.675 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.675 08:59:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.675 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.675 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.675 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.675 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.244 00:16:56.244 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:56.244 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.244 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:56.502 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.502 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.502 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.502 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.502 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.502 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:56.502 { 00:16:56.502 "cntlid": 81, 00:16:56.502 "qid": 0, 00:16:56.502 "state": "enabled", 00:16:56.502 "thread": "nvmf_tgt_poll_group_000", 00:16:56.502 "listen_address": { 00:16:56.502 "trtype": "TCP", 00:16:56.502 "adrfam": "IPv4", 00:16:56.502 "traddr": "10.0.0.2", 00:16:56.502 "trsvcid": "4420" 00:16:56.502 }, 00:16:56.502 "peer_address": { 00:16:56.502 "trtype": "TCP", 00:16:56.502 "adrfam": "IPv4", 00:16:56.502 "traddr": "10.0.0.1", 00:16:56.502 "trsvcid": "55386" 00:16:56.502 }, 00:16:56.502 "auth": { 00:16:56.502 "state": "completed", 00:16:56.502 "digest": "sha384", 00:16:56.502 "dhgroup": "ffdhe6144" 00:16:56.502 } 00:16:56.502 } 00:16:56.502 ]' 00:16:56.502 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:56.502 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:56.502 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:56.502 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:56.502 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:56.760 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:16:56.760 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.760 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.018 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:00:MjlmNzExODE4NTc2MmNmYWU0ZjYwMGI2ZjA4YTcyMjRjNGFmOGNmMTI4ZGQ5NTEyJWIzmQ==: --dhchap-ctrl-secret DHHC-1:03:NWE1MzkzODA0NDk2YThjNTlkNDYxNjBhZTliMDlkM2I2MTY3NjFmODQxMDU5MDU4ZjFhNDMxNzg4MDVlOWZjZEEK018=: 00:16:57.585 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.585 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:16:57.585 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.585 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.585 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.585 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:57.585 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:57.585 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:57.843 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:16:57.843 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:57.843 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:57.843 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:57.843 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:57.843 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.843 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.843 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.843 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.101 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.101 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.101 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.359 00:16:58.359 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:58.359 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:58.359 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.664 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.664 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.664 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.664 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.664 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.664 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:58.664 { 00:16:58.664 "cntlid": 83, 00:16:58.664 "qid": 0, 00:16:58.664 "state": "enabled", 00:16:58.664 "thread": "nvmf_tgt_poll_group_000", 00:16:58.664 "listen_address": { 00:16:58.664 "trtype": "TCP", 00:16:58.664 "adrfam": "IPv4", 00:16:58.664 "traddr": "10.0.0.2", 00:16:58.664 "trsvcid": "4420" 00:16:58.664 }, 00:16:58.664 "peer_address": { 00:16:58.664 "trtype": "TCP", 00:16:58.664 "adrfam": "IPv4", 00:16:58.664 "traddr": "10.0.0.1", 00:16:58.664 "trsvcid": "55416" 00:16:58.664 }, 00:16:58.664 "auth": { 00:16:58.664 "state": "completed", 00:16:58.664 "digest": "sha384", 00:16:58.664 "dhgroup": "ffdhe6144" 00:16:58.664 } 00:16:58.664 } 00:16:58.664 ]' 00:16:58.664 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:58.923 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:58.923 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:58.923 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:58.923 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:58.923 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.923 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.923 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.180 08:59:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:01:NzQ0NzIxYmI0MDRjYjI1MjMzYzJlMmRiNWFlZDFjNjXiFtJQ: --dhchap-ctrl-secret DHHC-1:02:MmVmMmUyZjcxYmMyNjdlNGJiNTg0Yjc2ZTM5MWRkNGQxZTQ1Y2MzNTQwNmViOGQ58n44LA==: 00:17:00.115 08:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.116 08:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:17:00.116 08:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.116 08:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.116 08:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.116 08:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:00.116 08:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:00.116 08:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:00.375 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:17:00.375 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:00.375 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:00.375 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:00.375 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:00.375 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.375 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.375 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.375 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.375 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.375 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.375 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.633 00:17:00.633 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:00.633 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.633 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:00.891 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.891 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.891 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.891 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.891 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.891 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:00.891 { 00:17:00.891 "cntlid": 85, 00:17:00.891 "qid": 0, 00:17:00.891 "state": "enabled", 00:17:00.891 "thread": "nvmf_tgt_poll_group_000", 00:17:00.891 "listen_address": { 00:17:00.891 "trtype": "TCP", 00:17:00.891 "adrfam": "IPv4", 00:17:00.891 "traddr": "10.0.0.2", 00:17:00.891 "trsvcid": "4420" 00:17:00.891 }, 00:17:00.891 "peer_address": { 00:17:00.891 "trtype": "TCP", 00:17:00.891 "adrfam": "IPv4", 00:17:00.891 "traddr": "10.0.0.1", 00:17:00.891 "trsvcid": "55446" 00:17:00.891 }, 00:17:00.891 "auth": { 00:17:00.891 "state": "completed", 00:17:00.891 "digest": "sha384", 00:17:00.891 "dhgroup": "ffdhe6144" 00:17:00.891 } 00:17:00.891 } 00:17:00.891 ]' 00:17:00.891 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:01.149 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:01.150 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:01.150 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:01.150 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:01.150 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.150 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.150 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.408 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:02:YmZlODA0YzdjMGI1ZjgwMjgyZDA3ZGJlYWM3MjAyN2RmMDZmMmJiOTIzY2U3MmYw8KNWIw==: --dhchap-ctrl-secret 
DHHC-1:01:YWI5ZjhlNjA2NjExMGY3ZTk2ODdhYTE3MTRkODk2MTL2aiAd: 00:17:01.974 08:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.974 08:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:17:01.974 08:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.974 08:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.232 08:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.232 08:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:02.232 08:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:02.232 08:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:02.490 08:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:17:02.490 08:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:02.490 08:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:02.490 08:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:02.490 08:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:02.490 08:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.490 08:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key3 00:17:02.490 08:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.490 08:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.490 08:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.490 08:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:02.490 08:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:03.057 00:17:03.057 08:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:03.057 08:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.057 08:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:03.057 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.057 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.057 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.057 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.057 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.057 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:03.057 { 00:17:03.057 "cntlid": 87, 00:17:03.057 "qid": 0, 00:17:03.057 "state": "enabled", 00:17:03.057 "thread": "nvmf_tgt_poll_group_000", 00:17:03.057 "listen_address": { 00:17:03.057 "trtype": "TCP", 00:17:03.057 "adrfam": "IPv4", 00:17:03.057 "traddr": "10.0.0.2", 00:17:03.057 "trsvcid": "4420" 00:17:03.057 }, 00:17:03.057 "peer_address": { 00:17:03.057 "trtype": "TCP", 00:17:03.057 "adrfam": "IPv4", 00:17:03.057 "traddr": "10.0.0.1", 00:17:03.057 "trsvcid": "55484" 00:17:03.057 }, 00:17:03.057 "auth": { 00:17:03.057 "state": "completed", 00:17:03.057 "digest": "sha384", 00:17:03.057 "dhgroup": "ffdhe6144" 00:17:03.057 } 00:17:03.057 } 00:17:03.057 ]' 00:17:03.057 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:03.316 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:03.316 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:03.316 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:03.316 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:03.316 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.316 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.317 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.575 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:03:YjliYzc2OGRhM2Y1YjI4MzAyOGI2NjVmOGRkZmQ3MmQ1NjRhYTEzNDVmNjk2ZWJkOWYzMzk5ODE2M2RkOTI1MlpGH1c=: 00:17:04.510 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.510 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.510 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:17:04.510 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.510 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.510 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.510 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:04.510 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:04.510 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:04.510 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:04.510 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:17:04.510 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:04.510 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:04.510 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:04.510 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:04.510 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.510 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.510 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.510 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.510 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.510 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.510 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.084 00:17:05.084 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:05.084 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:05.084 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.358 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.358 08:59:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.358 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.358 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.358 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.358 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:05.358 { 00:17:05.358 "cntlid": 89, 00:17:05.358 "qid": 0, 00:17:05.358 "state": "enabled", 00:17:05.358 "thread": "nvmf_tgt_poll_group_000", 00:17:05.358 "listen_address": { 00:17:05.358 "trtype": "TCP", 00:17:05.358 "adrfam": "IPv4", 00:17:05.358 "traddr": "10.0.0.2", 00:17:05.358 "trsvcid": "4420" 00:17:05.358 }, 00:17:05.358 "peer_address": { 00:17:05.358 "trtype": "TCP", 00:17:05.358 "adrfam": "IPv4", 00:17:05.358 "traddr": "10.0.0.1", 00:17:05.358 "trsvcid": "55518" 00:17:05.358 }, 00:17:05.358 "auth": { 00:17:05.358 "state": "completed", 00:17:05.358 "digest": "sha384", 00:17:05.358 "dhgroup": "ffdhe8192" 00:17:05.358 } 00:17:05.358 } 00:17:05.358 ]' 00:17:05.358 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:05.616 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:05.616 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:05.616 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:05.616 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:05.616 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.616 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.616 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.873 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:00:MjlmNzExODE4NTc2MmNmYWU0ZjYwMGI2ZjA4YTcyMjRjNGFmOGNmMTI4ZGQ5NTEyJWIzmQ==: --dhchap-ctrl-secret DHHC-1:03:NWE1MzkzODA0NDk2YThjNTlkNDYxNjBhZTliMDlkM2I2MTY3NjFmODQxMDU5MDU4ZjFhNDMxNzg4MDVlOWZjZEEK018=: 00:17:06.438 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.438 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:17:06.438 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.438 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.438 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.438 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:06.438 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:06.438 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:06.695 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:17:06.695 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:06.695 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:06.695 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:06.695 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:06.695 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.695 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.695 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.696 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.954 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.954 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.954 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.519 00:17:07.519 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:07.519 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:07.519 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.777 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.777 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.777 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.777 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.777 08:59:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.777 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:07.777 { 00:17:07.777 "cntlid": 91, 00:17:07.777 "qid": 0, 00:17:07.777 "state": "enabled", 00:17:07.777 "thread": "nvmf_tgt_poll_group_000", 00:17:07.777 "listen_address": { 00:17:07.778 "trtype": "TCP", 00:17:07.778 "adrfam": "IPv4", 00:17:07.778 "traddr": "10.0.0.2", 00:17:07.778 "trsvcid": "4420" 00:17:07.778 }, 00:17:07.778 "peer_address": { 00:17:07.778 "trtype": "TCP", 00:17:07.778 "adrfam": "IPv4", 00:17:07.778 "traddr": "10.0.0.1", 00:17:07.778 "trsvcid": "47948" 00:17:07.778 }, 00:17:07.778 "auth": { 00:17:07.778 "state": "completed", 00:17:07.778 "digest": "sha384", 00:17:07.778 "dhgroup": "ffdhe8192" 00:17:07.778 } 00:17:07.778 } 00:17:07.778 ]' 00:17:07.778 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:07.778 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:07.778 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:07.778 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:07.778 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:07.778 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.778 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.778 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.343 08:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:01:NzQ0NzIxYmI0MDRjYjI1MjMzYzJlMmRiNWFlZDFjNjXiFtJQ: --dhchap-ctrl-secret DHHC-1:02:MmVmMmUyZjcxYmMyNjdlNGJiNTg0Yjc2ZTM5MWRkNGQxZTQ1Y2MzNTQwNmViOGQ58n44LA==: 00:17:08.908 08:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.908 08:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:17:08.908 08:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.908 08:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.909 08:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.909 08:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:08.909 08:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:08.909 08:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:09.167 08:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:17:09.167 08:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:09.167 08:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:09.167 08:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:09.167 08:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:09.167 08:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.167 08:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.167 08:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.167 08:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.167 08:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.167 08:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.167 08:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.732 00:17:09.732 08:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:09.732 08:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.732 08:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:09.989 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.989 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.989 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.989 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.989 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.989 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:09.989 { 00:17:09.989 "cntlid": 93, 00:17:09.989 "qid": 0, 00:17:09.989 "state": "enabled", 00:17:09.989 "thread": "nvmf_tgt_poll_group_000", 00:17:09.989 "listen_address": { 00:17:09.989 "trtype": "TCP", 00:17:09.989 "adrfam": "IPv4", 
00:17:09.989 "traddr": "10.0.0.2", 00:17:09.989 "trsvcid": "4420" 00:17:09.989 }, 00:17:09.989 "peer_address": { 00:17:09.989 "trtype": "TCP", 00:17:09.989 "adrfam": "IPv4", 00:17:09.989 "traddr": "10.0.0.1", 00:17:09.989 "trsvcid": "47962" 00:17:09.989 }, 00:17:09.989 "auth": { 00:17:09.989 "state": "completed", 00:17:09.989 "digest": "sha384", 00:17:09.989 "dhgroup": "ffdhe8192" 00:17:09.989 } 00:17:09.989 } 00:17:09.989 ]' 00:17:09.990 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:10.247 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:10.247 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:10.247 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:10.247 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:10.247 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.247 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.247 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.505 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:02:YmZlODA0YzdjMGI1ZjgwMjgyZDA3ZGJlYWM3MjAyN2RmMDZmMmJiOTIzY2U3MmYw8KNWIw==: --dhchap-ctrl-secret DHHC-1:01:YWI5ZjhlNjA2NjExMGY3ZTk2ODdhYTE3MTRkODk2MTL2aiAd: 00:17:11.071 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.071 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:17:11.071 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.071 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.071 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.071 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:11.071 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:11.071 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:11.329 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:17:11.329 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:11.329 08:59:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:11.329 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:11.329 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:11.329 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.329 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key3 00:17:11.329 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.329 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.329 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.329 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:11.329 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:12.264 00:17:12.264 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:12.264 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.264 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:12.264 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.264 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.264 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.264 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.264 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.264 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:12.264 { 00:17:12.264 "cntlid": 95, 00:17:12.264 "qid": 0, 00:17:12.264 "state": "enabled", 00:17:12.264 "thread": "nvmf_tgt_poll_group_000", 00:17:12.264 "listen_address": { 00:17:12.264 "trtype": "TCP", 00:17:12.264 "adrfam": "IPv4", 00:17:12.264 "traddr": "10.0.0.2", 00:17:12.264 "trsvcid": "4420" 00:17:12.264 }, 00:17:12.264 "peer_address": { 00:17:12.264 "trtype": "TCP", 00:17:12.264 "adrfam": "IPv4", 00:17:12.264 "traddr": "10.0.0.1", 00:17:12.264 "trsvcid": "47982" 00:17:12.264 }, 00:17:12.264 "auth": { 00:17:12.264 "state": "completed", 00:17:12.264 "digest": "sha384", 00:17:12.264 "dhgroup": "ffdhe8192" 00:17:12.264 } 00:17:12.264 } 00:17:12.264 ]' 00:17:12.264 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:12.264 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:12.264 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:12.522 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:12.522 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:12.522 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.522 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.522 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.780 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:03:YjliYzc2OGRhM2Y1YjI4MzAyOGI2NjVmOGRkZmQ3MmQ1NjRhYTEzNDVmNjk2ZWJkOWYzMzk5ODE2M2RkOTI1MlpGH1c=: 00:17:13.345 08:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.345 08:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:17:13.345 08:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.345 08:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.345 08:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.345 08:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:13.345 08:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:13.345 08:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:13.345 08:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:13.345 08:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:13.604 08:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:17:13.604 08:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:13.604 08:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:13.604 08:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:13.604 08:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:13.604 08:59:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.604 08:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.604 08:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.604 08:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.604 08:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.604 08:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.604 08:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.862 00:17:14.120 08:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:14.120 08:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:14.120 08:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.120 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.120 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.120 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.120 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.120 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.120 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:14.120 { 00:17:14.120 "cntlid": 97, 00:17:14.120 "qid": 0, 00:17:14.120 "state": "enabled", 00:17:14.120 "thread": "nvmf_tgt_poll_group_000", 00:17:14.120 "listen_address": { 00:17:14.120 "trtype": "TCP", 00:17:14.120 "adrfam": "IPv4", 00:17:14.120 "traddr": "10.0.0.2", 00:17:14.120 "trsvcid": "4420" 00:17:14.120 }, 00:17:14.120 "peer_address": { 00:17:14.120 "trtype": "TCP", 00:17:14.120 "adrfam": "IPv4", 00:17:14.120 "traddr": "10.0.0.1", 00:17:14.120 "trsvcid": "48016" 00:17:14.120 }, 00:17:14.120 "auth": { 00:17:14.120 "state": "completed", 00:17:14.120 "digest": "sha512", 00:17:14.120 "dhgroup": "null" 00:17:14.120 } 00:17:14.120 } 00:17:14.120 ]' 00:17:14.120 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:14.452 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:14.452 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:14.452 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:14.452 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:14.452 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.452 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.452 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.711 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:00:MjlmNzExODE4NTc2MmNmYWU0ZjYwMGI2ZjA4YTcyMjRjNGFmOGNmMTI4ZGQ5NTEyJWIzmQ==: --dhchap-ctrl-secret DHHC-1:03:NWE1MzkzODA0NDk2YThjNTlkNDYxNjBhZTliMDlkM2I2MTY3NjFmODQxMDU5MDU4ZjFhNDMxNzg4MDVlOWZjZEEK018=: 00:17:15.647 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.647 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.647 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:17:15.647 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.647 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.647 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.647 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:15.647 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:15.647 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:15.647 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:17:15.647 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:15.647 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:15.647 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:15.647 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:15.647 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.647 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.647 08:59:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.647 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.647 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.647 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.647 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.215 00:17:16.215 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:16.215 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:16.215 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.215 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.215 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.215 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.215 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.215 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.215 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:16.215 { 00:17:16.215 "cntlid": 99, 00:17:16.215 "qid": 0, 00:17:16.215 "state": "enabled", 00:17:16.215 "thread": "nvmf_tgt_poll_group_000", 00:17:16.215 "listen_address": { 00:17:16.215 "trtype": "TCP", 00:17:16.215 "adrfam": "IPv4", 00:17:16.215 "traddr": "10.0.0.2", 00:17:16.215 "trsvcid": "4420" 00:17:16.215 }, 00:17:16.215 "peer_address": { 00:17:16.215 "trtype": "TCP", 00:17:16.215 "adrfam": "IPv4", 00:17:16.215 "traddr": "10.0.0.1", 00:17:16.215 "trsvcid": "42432" 00:17:16.215 }, 00:17:16.215 "auth": { 00:17:16.215 "state": "completed", 00:17:16.215 "digest": "sha512", 00:17:16.215 "dhgroup": "null" 00:17:16.215 } 00:17:16.215 } 00:17:16.215 ]' 00:17:16.215 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:16.480 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:16.480 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:16.480 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:16.480 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:16.480 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 
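Note on the three jq probes just above: this is how each iteration confirms that the qpair actually negotiated the digest and DH group that were configured (here sha512 with the null group) and that authentication reached the "completed" state. A minimal standalone sketch of that check, assuming the target app listens on its default RPC socket and that jq is installed; only the rpc.py path, NQN and expected values are taken from the trace:

    #!/usr/bin/env bash
    # Condensed form of the qpair verification step in the trace above.
    # The default target RPC socket (/var/tmp/spdk.sock) is an assumption.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0

    qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")

    # Check the negotiated auth parameters on the first qpair, as the jq probes do.
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    echo "qpair 0 authenticated with sha512 / null dhgroup"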
00:17:16.480 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.480 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.737 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:01:NzQ0NzIxYmI0MDRjYjI1MjMzYzJlMmRiNWFlZDFjNjXiFtJQ: --dhchap-ctrl-secret DHHC-1:02:MmVmMmUyZjcxYmMyNjdlNGJiNTg0Yjc2ZTM5MWRkNGQxZTQ1Y2MzNTQwNmViOGQ58n44LA==: 00:17:17.303 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.303 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:17:17.303 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.303 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.303 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.303 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:17.303 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:17.303 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:17.562 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:17:17.562 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:17.562 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:17.562 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:17.562 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:17.562 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.562 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.562 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.562 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.562 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.562 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.562 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.129 00:17:18.129 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:18.129 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:18.129 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.388 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.388 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.388 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.388 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.388 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.388 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:18.388 { 00:17:18.388 "cntlid": 101, 00:17:18.388 "qid": 0, 00:17:18.388 "state": "enabled", 00:17:18.388 "thread": "nvmf_tgt_poll_group_000", 00:17:18.388 "listen_address": { 00:17:18.388 "trtype": "TCP", 00:17:18.388 "adrfam": "IPv4", 00:17:18.388 "traddr": "10.0.0.2", 00:17:18.388 "trsvcid": "4420" 00:17:18.388 }, 00:17:18.388 "peer_address": { 00:17:18.388 "trtype": "TCP", 00:17:18.388 "adrfam": "IPv4", 00:17:18.388 "traddr": "10.0.0.1", 00:17:18.388 "trsvcid": "42462" 00:17:18.388 }, 00:17:18.388 "auth": { 00:17:18.388 "state": "completed", 00:17:18.388 "digest": "sha512", 00:17:18.388 "dhgroup": "null" 00:17:18.388 } 00:17:18.388 } 00:17:18.388 ]' 00:17:18.388 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:18.388 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:18.388 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:18.388 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:18.388 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:18.388 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.388 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.388 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.646 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 
-i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:02:YmZlODA0YzdjMGI1ZjgwMjgyZDA3ZGJlYWM3MjAyN2RmMDZmMmJiOTIzY2U3MmYw8KNWIw==: --dhchap-ctrl-secret DHHC-1:01:YWI5ZjhlNjA2NjExMGY3ZTk2ODdhYTE3MTRkODk2MTL2aiAd: 00:17:19.580 08:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.580 08:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:17:19.580 08:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.580 08:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.580 08:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.580 08:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:19.580 08:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:19.580 08:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:19.838 08:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:17:19.838 08:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:19.838 08:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:19.838 08:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:19.838 08:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:19.838 08:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.838 08:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key3 00:17:19.838 08:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.839 08:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.839 08:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.839 08:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:19.839 08:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:20.098 
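Note on the key3 iteration above: nvmf_subsystem_add_host and bdev_nvme_attach_controller are called without a --dhchap-ctrlr-key, because the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion only emits the flag pair when a controller key exists for that key id. A tiny illustration of that ${var:+...} idiom, not part of the test script; the array contents below are made up for the demo:

    # ckeys[3] is deliberately empty, mirroring the trace where key3 has no ctrlr key.
    ckeys=(ckey0 ckey1 ckey2 "")

    for keyid in 0 1 2 3; do
        # Expands to two words (--dhchap-ctrlr-key ckeyN) when ckeys[keyid] is
        # non-empty, and to nothing at all when it is empty.
        ckey_args=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
        echo "key$keyid: --dhchap-key key$keyid ${ckey_args[*]:-(no controller key)}"
    done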
00:17:20.098 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:20.098 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:20.098 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.356 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.356 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.356 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.356 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.356 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.356 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:20.356 { 00:17:20.356 "cntlid": 103, 00:17:20.356 "qid": 0, 00:17:20.356 "state": "enabled", 00:17:20.356 "thread": "nvmf_tgt_poll_group_000", 00:17:20.356 "listen_address": { 00:17:20.356 "trtype": "TCP", 00:17:20.356 "adrfam": "IPv4", 00:17:20.356 "traddr": "10.0.0.2", 00:17:20.356 "trsvcid": "4420" 00:17:20.356 }, 00:17:20.356 "peer_address": { 00:17:20.356 "trtype": "TCP", 00:17:20.356 "adrfam": "IPv4", 00:17:20.357 "traddr": "10.0.0.1", 00:17:20.357 "trsvcid": "42502" 00:17:20.357 }, 00:17:20.357 "auth": { 00:17:20.357 "state": "completed", 00:17:20.357 "digest": "sha512", 00:17:20.357 "dhgroup": "null" 00:17:20.357 } 00:17:20.357 } 00:17:20.357 ]' 00:17:20.357 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:20.357 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:20.357 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:20.357 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:20.357 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:20.614 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.614 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.615 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.873 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:03:YjliYzc2OGRhM2Y1YjI4MzAyOGI2NjVmOGRkZmQ3MmQ1NjRhYTEzNDVmNjk2ZWJkOWYzMzk5ODE2M2RkOTI1MlpGH1c=: 00:17:21.440 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.440 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:17:21.440 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.440 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.440 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.440 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:21.440 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:21.440 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:21.440 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:21.698 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:17:21.698 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:21.698 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:21.698 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:21.698 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:21.699 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.699 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.699 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.699 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.699 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.699 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.699 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.265 00:17:22.265 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:22.265 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.265 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:17:22.265 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.523 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.523 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.523 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.523 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.523 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:22.523 { 00:17:22.523 "cntlid": 105, 00:17:22.523 "qid": 0, 00:17:22.523 "state": "enabled", 00:17:22.523 "thread": "nvmf_tgt_poll_group_000", 00:17:22.523 "listen_address": { 00:17:22.523 "trtype": "TCP", 00:17:22.523 "adrfam": "IPv4", 00:17:22.523 "traddr": "10.0.0.2", 00:17:22.523 "trsvcid": "4420" 00:17:22.523 }, 00:17:22.523 "peer_address": { 00:17:22.523 "trtype": "TCP", 00:17:22.523 "adrfam": "IPv4", 00:17:22.523 "traddr": "10.0.0.1", 00:17:22.523 "trsvcid": "42518" 00:17:22.523 }, 00:17:22.523 "auth": { 00:17:22.523 "state": "completed", 00:17:22.523 "digest": "sha512", 00:17:22.523 "dhgroup": "ffdhe2048" 00:17:22.523 } 00:17:22.523 } 00:17:22.523 ]' 00:17:22.523 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:22.523 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:22.523 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:22.523 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:22.523 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:22.523 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.524 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.524 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.782 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:00:MjlmNzExODE4NTc2MmNmYWU0ZjYwMGI2ZjA4YTcyMjRjNGFmOGNmMTI4ZGQ5NTEyJWIzmQ==: --dhchap-ctrl-secret DHHC-1:03:NWE1MzkzODA0NDk2YThjNTlkNDYxNjBhZTliMDlkM2I2MTY3NjFmODQxMDU5MDU4ZjFhNDMxNzg4MDVlOWZjZEEK018=: 00:17:23.717 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.717 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:17:23.717 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
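Each pass then verifies that authentication actually completed rather than just that the attach call returned: the host-side controller list is read back, and the target is asked for the subsystem's queue pairs so the negotiated digest, dhgroup and auth state can be checked with jq. Distilled from the commands in the trace (the RPC names and jq paths are exactly those shown; the helper variables are the ones introduced in the sketch above, and the expected dhgroup matches the ffdhe2048 pass that just finished):

    # Host side: confirm the bdev controller attached under the expected name.
    [[ $("$rpc" -s "$host_sock" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # Target side: inspect the qpair that was just authenticated.
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    # Drop the bdev controller before the kernel-initiator leg of the pass.
    "$rpc" -s "$host_sock" bdev_nvme_detach_controller nvme0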
00:17:23.717 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.717 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.717 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:23.717 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:23.717 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:23.717 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:17:23.717 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:23.717 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:23.717 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:23.717 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:23.717 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.717 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.717 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.717 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.717 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.717 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.717 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.283 00:17:24.283 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:24.283 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.283 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:24.542 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.542 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.542 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.542 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.542 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.542 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:24.542 { 00:17:24.542 "cntlid": 107, 00:17:24.542 "qid": 0, 00:17:24.542 "state": "enabled", 00:17:24.542 "thread": "nvmf_tgt_poll_group_000", 00:17:24.542 "listen_address": { 00:17:24.542 "trtype": "TCP", 00:17:24.542 "adrfam": "IPv4", 00:17:24.542 "traddr": "10.0.0.2", 00:17:24.542 "trsvcid": "4420" 00:17:24.542 }, 00:17:24.542 "peer_address": { 00:17:24.542 "trtype": "TCP", 00:17:24.542 "adrfam": "IPv4", 00:17:24.542 "traddr": "10.0.0.1", 00:17:24.542 "trsvcid": "42530" 00:17:24.542 }, 00:17:24.542 "auth": { 00:17:24.542 "state": "completed", 00:17:24.542 "digest": "sha512", 00:17:24.542 "dhgroup": "ffdhe2048" 00:17:24.542 } 00:17:24.542 } 00:17:24.542 ]' 00:17:24.542 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:24.542 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:24.542 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:24.542 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:24.542 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:24.542 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.542 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.542 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.800 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:01:NzQ0NzIxYmI0MDRjYjI1MjMzYzJlMmRiNWFlZDFjNjXiFtJQ: --dhchap-ctrl-secret DHHC-1:02:MmVmMmUyZjcxYmMyNjdlNGJiNTg0Yjc2ZTM5MWRkNGQxZTQ1Y2MzNTQwNmViOGQ58n44LA==: 00:17:25.736 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.736 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.736 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:17:25.736 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.736 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.736 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.736 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:25.736 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 
-- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:25.736 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:25.994 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:17:25.994 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:25.994 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:25.994 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:25.994 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:25.994 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.994 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.994 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.994 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.994 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.994 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.994 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.253 00:17:26.253 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:26.253 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:26.253 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.511 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.512 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.512 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.512 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.512 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.512 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:26.512 { 00:17:26.512 "cntlid": 109, 00:17:26.512 "qid": 
0, 00:17:26.512 "state": "enabled", 00:17:26.512 "thread": "nvmf_tgt_poll_group_000", 00:17:26.512 "listen_address": { 00:17:26.512 "trtype": "TCP", 00:17:26.512 "adrfam": "IPv4", 00:17:26.512 "traddr": "10.0.0.2", 00:17:26.512 "trsvcid": "4420" 00:17:26.512 }, 00:17:26.512 "peer_address": { 00:17:26.512 "trtype": "TCP", 00:17:26.512 "adrfam": "IPv4", 00:17:26.512 "traddr": "10.0.0.1", 00:17:26.512 "trsvcid": "46502" 00:17:26.512 }, 00:17:26.512 "auth": { 00:17:26.512 "state": "completed", 00:17:26.512 "digest": "sha512", 00:17:26.512 "dhgroup": "ffdhe2048" 00:17:26.512 } 00:17:26.512 } 00:17:26.512 ]' 00:17:26.512 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:26.770 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:26.771 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:26.771 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:26.771 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:26.771 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.771 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.771 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.030 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:02:YmZlODA0YzdjMGI1ZjgwMjgyZDA3ZGJlYWM3MjAyN2RmMDZmMmJiOTIzY2U3MmYw8KNWIw==: --dhchap-ctrl-secret DHHC-1:01:YWI5ZjhlNjA2NjExMGY3ZTk2ODdhYTE3MTRkODk2MTL2aiAd: 00:17:27.597 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.597 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:17:27.597 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.597 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.597 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.597 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:27.597 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:27.597 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:27.856 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 
ffdhe2048 3 00:17:27.856 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:27.856 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:27.856 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:27.856 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:27.856 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.856 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key3 00:17:27.856 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.856 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.856 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.856 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:27.856 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:28.115 00:17:28.115 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:28.115 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:28.115 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.374 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.374 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.374 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.374 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.374 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.374 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:28.374 { 00:17:28.374 "cntlid": 111, 00:17:28.374 "qid": 0, 00:17:28.374 "state": "enabled", 00:17:28.374 "thread": "nvmf_tgt_poll_group_000", 00:17:28.374 "listen_address": { 00:17:28.374 "trtype": "TCP", 00:17:28.374 "adrfam": "IPv4", 00:17:28.374 "traddr": "10.0.0.2", 00:17:28.374 "trsvcid": "4420" 00:17:28.374 }, 00:17:28.374 "peer_address": { 00:17:28.374 "trtype": "TCP", 00:17:28.374 "adrfam": "IPv4", 00:17:28.374 "traddr": "10.0.0.1", 00:17:28.374 "trsvcid": "46538" 00:17:28.374 }, 00:17:28.374 "auth": { 00:17:28.374 "state": "completed", 00:17:28.374 
"digest": "sha512", 00:17:28.374 "dhgroup": "ffdhe2048" 00:17:28.374 } 00:17:28.374 } 00:17:28.374 ]' 00:17:28.374 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:28.633 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:28.633 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:28.633 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:28.633 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:28.633 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.633 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.633 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.892 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:03:YjliYzc2OGRhM2Y1YjI4MzAyOGI2NjVmOGRkZmQ3MmQ1NjRhYTEzNDVmNjk2ZWJkOWYzMzk5ODE2M2RkOTI1MlpGH1c=: 00:17:29.828 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.828 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:17:29.828 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.828 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.828 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.828 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:29.828 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:29.828 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:29.828 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:29.828 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:17:29.828 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:29.828 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:29.828 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:29.828 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key0 00:17:29.828 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.828 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.828 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.828 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.828 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.828 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.828 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.416 00:17:30.416 08:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:30.416 08:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.416 08:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:30.416 08:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.416 08:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.416 08:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.416 08:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.416 08:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.416 08:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:30.416 { 00:17:30.416 "cntlid": 113, 00:17:30.416 "qid": 0, 00:17:30.416 "state": "enabled", 00:17:30.416 "thread": "nvmf_tgt_poll_group_000", 00:17:30.416 "listen_address": { 00:17:30.416 "trtype": "TCP", 00:17:30.416 "adrfam": "IPv4", 00:17:30.416 "traddr": "10.0.0.2", 00:17:30.416 "trsvcid": "4420" 00:17:30.416 }, 00:17:30.416 "peer_address": { 00:17:30.416 "trtype": "TCP", 00:17:30.416 "adrfam": "IPv4", 00:17:30.416 "traddr": "10.0.0.1", 00:17:30.416 "trsvcid": "46552" 00:17:30.416 }, 00:17:30.416 "auth": { 00:17:30.416 "state": "completed", 00:17:30.416 "digest": "sha512", 00:17:30.416 "dhgroup": "ffdhe3072" 00:17:30.416 } 00:17:30.416 } 00:17:30.416 ]' 00:17:30.416 08:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:30.675 08:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:30.675 08:59:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:30.675 08:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:30.675 08:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:30.675 08:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.675 08:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.675 08:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.933 08:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:00:MjlmNzExODE4NTc2MmNmYWU0ZjYwMGI2ZjA4YTcyMjRjNGFmOGNmMTI4ZGQ5NTEyJWIzmQ==: --dhchap-ctrl-secret DHHC-1:03:NWE1MzkzODA0NDk2YThjNTlkNDYxNjBhZTliMDlkM2I2MTY3NjFmODQxMDU5MDU4ZjFhNDMxNzg4MDVlOWZjZEEK018=: 00:17:31.501 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.501 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.501 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:17:31.501 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.501 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.501 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.501 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:31.501 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:31.501 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:31.759 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:17:31.759 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:31.759 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:31.759 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:31.759 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:31.759 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.759 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.759 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.759 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.759 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.759 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.759 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.326 00:17:32.326 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:32.326 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:32.326 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.585 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.585 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.585 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.585 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.585 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.585 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:32.585 { 00:17:32.585 "cntlid": 115, 00:17:32.585 "qid": 0, 00:17:32.585 "state": "enabled", 00:17:32.585 "thread": "nvmf_tgt_poll_group_000", 00:17:32.585 "listen_address": { 00:17:32.585 "trtype": "TCP", 00:17:32.585 "adrfam": "IPv4", 00:17:32.585 "traddr": "10.0.0.2", 00:17:32.585 "trsvcid": "4420" 00:17:32.585 }, 00:17:32.585 "peer_address": { 00:17:32.585 "trtype": "TCP", 00:17:32.585 "adrfam": "IPv4", 00:17:32.585 "traddr": "10.0.0.1", 00:17:32.585 "trsvcid": "46600" 00:17:32.585 }, 00:17:32.585 "auth": { 00:17:32.585 "state": "completed", 00:17:32.585 "digest": "sha512", 00:17:32.585 "dhgroup": "ffdhe3072" 00:17:32.585 } 00:17:32.585 } 00:17:32.585 ]' 00:17:32.585 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:32.585 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:32.585 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:32.585 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:32.585 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:32.585 08:59:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.585 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.586 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.845 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:01:NzQ0NzIxYmI0MDRjYjI1MjMzYzJlMmRiNWFlZDFjNjXiFtJQ: --dhchap-ctrl-secret DHHC-1:02:MmVmMmUyZjcxYmMyNjdlNGJiNTg0Yjc2ZTM5MWRkNGQxZTQ1Y2MzNTQwNmViOGQ58n44LA==: 00:17:33.801 08:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.801 08:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:17:33.802 08:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.802 08:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.802 08:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.802 08:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:33.802 08:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:33.802 08:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:33.802 08:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:17:33.802 08:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:33.802 08:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:33.802 08:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:33.802 08:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:33.802 08:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.802 08:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.802 08:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.802 08:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.802 08:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.802 08:59:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.802 08:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.370 00:17:34.370 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:34.370 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:34.370 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.370 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.370 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.370 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.370 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.370 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.629 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:34.629 { 00:17:34.629 "cntlid": 117, 00:17:34.629 "qid": 0, 00:17:34.629 "state": "enabled", 00:17:34.629 "thread": "nvmf_tgt_poll_group_000", 00:17:34.629 "listen_address": { 00:17:34.629 "trtype": "TCP", 00:17:34.629 "adrfam": "IPv4", 00:17:34.629 "traddr": "10.0.0.2", 00:17:34.629 "trsvcid": "4420" 00:17:34.629 }, 00:17:34.629 "peer_address": { 00:17:34.629 "trtype": "TCP", 00:17:34.629 "adrfam": "IPv4", 00:17:34.629 "traddr": "10.0.0.1", 00:17:34.629 "trsvcid": "46636" 00:17:34.629 }, 00:17:34.629 "auth": { 00:17:34.629 "state": "completed", 00:17:34.629 "digest": "sha512", 00:17:34.629 "dhgroup": "ffdhe3072" 00:17:34.629 } 00:17:34.629 } 00:17:34.629 ]' 00:17:34.629 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:34.629 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:34.629 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:34.629 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:34.629 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:34.629 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.629 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.629 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
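Once the SPDK bdev path has been checked and torn down, the same credentials are exercised through the kernel NVMe/TCP initiator: nvme-cli connects with the DHHC-1 host secret (plus the controller secret when one is configured for that key index), disconnects, and the host entry is removed so the next key/dhgroup combination starts clean. A sketch with the flags seen in the trace; the DHHC-1 strings are placeholders rather than the generated keys, and the helper variables come from the earlier sketches:

    hostid=a4705431-95c9-4bc1-9185-4a8233d2d7f5   # same UUID as in the host NQN

    # Kernel initiator leg of the pass (placeholder secrets stand in for the real keys).
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
        --dhchap-secret 'DHHC-1:02:<host key>' \
        --dhchap-ctrl-secret 'DHHC-1:01:<controller key>'

    nvme disconnect -n "$subnqn"

    # Remove the host entry so the next iteration can re-add it with a new key.
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"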
00:17:34.888 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:02:YmZlODA0YzdjMGI1ZjgwMjgyZDA3ZGJlYWM3MjAyN2RmMDZmMmJiOTIzY2U3MmYw8KNWIw==: --dhchap-ctrl-secret DHHC-1:01:YWI5ZjhlNjA2NjExMGY3ZTk2ODdhYTE3MTRkODk2MTL2aiAd: 00:17:35.822 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.822 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:17:35.822 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.822 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.822 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.822 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:35.822 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:35.822 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:36.080 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:17:36.080 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:36.080 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:36.080 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:36.080 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:36.080 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.080 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key3 00:17:36.080 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.080 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.080 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.080 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:36.080 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:36.354 00:17:36.354 08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:36.354 08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.354 08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:36.612 08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.612 08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.612 08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.612 08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.612 08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.612 08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:36.612 { 00:17:36.612 "cntlid": 119, 00:17:36.612 "qid": 0, 00:17:36.612 "state": "enabled", 00:17:36.612 "thread": "nvmf_tgt_poll_group_000", 00:17:36.612 "listen_address": { 00:17:36.612 "trtype": "TCP", 00:17:36.612 "adrfam": "IPv4", 00:17:36.612 "traddr": "10.0.0.2", 00:17:36.612 "trsvcid": "4420" 00:17:36.612 }, 00:17:36.612 "peer_address": { 00:17:36.612 "trtype": "TCP", 00:17:36.612 "adrfam": "IPv4", 00:17:36.612 "traddr": "10.0.0.1", 00:17:36.612 "trsvcid": "49230" 00:17:36.612 }, 00:17:36.612 "auth": { 00:17:36.612 "state": "completed", 00:17:36.612 "digest": "sha512", 00:17:36.612 "dhgroup": "ffdhe3072" 00:17:36.612 } 00:17:36.612 } 00:17:36.612 ]' 00:17:36.612 08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:36.612 08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:36.871 08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:36.871 08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:36.871 08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:36.871 08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.871 08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.871 08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.129 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:03:YjliYzc2OGRhM2Y1YjI4MzAyOGI2NjVmOGRkZmQ3MmQ1NjRhYTEzNDVmNjk2ZWJkOWYzMzk5ODE2M2RkOTI1MlpGH1c=: 00:17:37.697 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:37.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.697 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:17:37.697 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.697 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.697 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.697 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:37.697 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:37.697 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:37.697 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:37.955 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:17:37.955 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:37.955 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:37.955 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:37.955 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:37.955 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.955 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.955 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.955 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.955 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.955 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.955 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.521 00:17:38.521 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:38.521 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:17:38.521 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.779 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.779 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.779 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.779 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.779 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.779 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:38.779 { 00:17:38.779 "cntlid": 121, 00:17:38.779 "qid": 0, 00:17:38.780 "state": "enabled", 00:17:38.780 "thread": "nvmf_tgt_poll_group_000", 00:17:38.780 "listen_address": { 00:17:38.780 "trtype": "TCP", 00:17:38.780 "adrfam": "IPv4", 00:17:38.780 "traddr": "10.0.0.2", 00:17:38.780 "trsvcid": "4420" 00:17:38.780 }, 00:17:38.780 "peer_address": { 00:17:38.780 "trtype": "TCP", 00:17:38.780 "adrfam": "IPv4", 00:17:38.780 "traddr": "10.0.0.1", 00:17:38.780 "trsvcid": "49262" 00:17:38.780 }, 00:17:38.780 "auth": { 00:17:38.780 "state": "completed", 00:17:38.780 "digest": "sha512", 00:17:38.780 "dhgroup": "ffdhe4096" 00:17:38.780 } 00:17:38.780 } 00:17:38.780 ]' 00:17:38.780 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:38.780 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:38.780 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:38.780 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:38.780 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:38.780 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.780 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.780 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.038 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:00:MjlmNzExODE4NTc2MmNmYWU0ZjYwMGI2ZjA4YTcyMjRjNGFmOGNmMTI4ZGQ5NTEyJWIzmQ==: --dhchap-ctrl-secret DHHC-1:03:NWE1MzkzODA0NDk2YThjNTlkNDYxNjBhZTliMDlkM2I2MTY3NjFmODQxMDU5MDU4ZjFhNDMxNzg4MDVlOWZjZEEK018=: 00:17:39.605 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.605 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:17:39.605 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.605 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.605 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.605 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:39.605 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:39.605 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:39.864 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:17:39.864 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:39.864 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:39.864 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:39.864 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:39.864 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.864 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.864 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.864 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.864 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.864 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.864 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.432 00:17:40.433 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:40.433 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.433 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:40.692 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.692 08:59:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.692 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.692 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.692 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.692 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:40.692 { 00:17:40.692 "cntlid": 123, 00:17:40.692 "qid": 0, 00:17:40.692 "state": "enabled", 00:17:40.692 "thread": "nvmf_tgt_poll_group_000", 00:17:40.692 "listen_address": { 00:17:40.692 "trtype": "TCP", 00:17:40.692 "adrfam": "IPv4", 00:17:40.692 "traddr": "10.0.0.2", 00:17:40.692 "trsvcid": "4420" 00:17:40.692 }, 00:17:40.692 "peer_address": { 00:17:40.692 "trtype": "TCP", 00:17:40.692 "adrfam": "IPv4", 00:17:40.692 "traddr": "10.0.0.1", 00:17:40.692 "trsvcid": "49292" 00:17:40.692 }, 00:17:40.692 "auth": { 00:17:40.692 "state": "completed", 00:17:40.692 "digest": "sha512", 00:17:40.692 "dhgroup": "ffdhe4096" 00:17:40.692 } 00:17:40.692 } 00:17:40.692 ]' 00:17:40.692 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:40.692 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:40.692 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:40.692 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:40.692 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:40.692 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.692 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.692 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.951 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:01:NzQ0NzIxYmI0MDRjYjI1MjMzYzJlMmRiNWFlZDFjNjXiFtJQ: --dhchap-ctrl-secret DHHC-1:02:MmVmMmUyZjcxYmMyNjdlNGJiNTg0Yjc2ZTM5MWRkNGQxZTQ1Y2MzNTQwNmViOGQ58n44LA==: 00:17:41.518 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.518 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.518 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:17:41.518 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.518 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.795 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
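The trace above is one pass of the test's connect_authenticate helper for the sha512 digest and the ffdhe4096 DH group: the host bdev layer is restricted to that digest/dhgroup, the host NQN is re-added to the subsystem with the key pair under test, a controller is attached over TCP (which drives the DH-HMAC-CHAP handshake), the resulting qpair is inspected, and everything is torn down again. Condensed into the commands actually traced (rpc_cmd and hostrpc are the test's wrappers around scripts/rpc.py, the latter pointing at the host's /var/tmp/host.sock RPC socket; key1/ckey1 are key objects registered earlier in auth.sh, and $hostnqn is the uuid-based NQN seen throughout the log):

    # Host side: only negotiate the digest/dhgroup under test.
    hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

    # Target side: allow the host NQN with a DH-CHAP key and a controller (bidirectional) key.
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Host side: attaching the controller performs the authentication.
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Target side: the qpair's auth block should report sha512 / ffdhe4096 / completed.
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth | .digest, .dhgroup, .state'

    # Clean up before the next key index.
    hostrpc bdev_nvme_detach_controller nvme0
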
00:17:41.795 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:41.795 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:41.795 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:41.795 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:17:41.795 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:41.795 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:41.795 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:41.795 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:41.795 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.795 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.795 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.795 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.060 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.060 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.060 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.319 00:17:42.319 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:42.319 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:42.319 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.578 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.578 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.578 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.578 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.578 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.578 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:42.578 { 00:17:42.578 "cntlid": 125, 00:17:42.578 "qid": 0, 00:17:42.578 "state": "enabled", 00:17:42.578 "thread": "nvmf_tgt_poll_group_000", 00:17:42.578 "listen_address": { 00:17:42.578 "trtype": "TCP", 00:17:42.578 "adrfam": "IPv4", 00:17:42.578 "traddr": "10.0.0.2", 00:17:42.578 "trsvcid": "4420" 00:17:42.578 }, 00:17:42.578 "peer_address": { 00:17:42.578 "trtype": "TCP", 00:17:42.578 "adrfam": "IPv4", 00:17:42.578 "traddr": "10.0.0.1", 00:17:42.578 "trsvcid": "49314" 00:17:42.578 }, 00:17:42.578 "auth": { 00:17:42.578 "state": "completed", 00:17:42.578 "digest": "sha512", 00:17:42.578 "dhgroup": "ffdhe4096" 00:17:42.578 } 00:17:42.578 } 00:17:42.578 ]' 00:17:42.578 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:42.578 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:42.578 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:42.578 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:42.578 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:42.837 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.837 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.837 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.094 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:02:YmZlODA0YzdjMGI1ZjgwMjgyZDA3ZGJlYWM3MjAyN2RmMDZmMmJiOTIzY2U3MmYw8KNWIw==: --dhchap-ctrl-secret DHHC-1:01:YWI5ZjhlNjA2NjExMGY3ZTk2ODdhYTE3MTRkODk2MTL2aiAd: 00:17:43.660 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.660 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:17:43.660 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.660 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.917 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.917 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:43.917 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:43.917 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:44.175 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:17:44.175 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:44.175 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:44.175 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:44.175 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:44.175 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.175 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key3 00:17:44.175 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.175 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.175 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.175 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:44.175 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:44.433 00:17:44.433 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:44.433 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:44.433 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.691 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.691 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.691 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.691 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.691 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.691 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:44.691 { 00:17:44.691 "cntlid": 127, 00:17:44.691 "qid": 0, 00:17:44.691 "state": "enabled", 00:17:44.691 "thread": "nvmf_tgt_poll_group_000", 00:17:44.691 "listen_address": { 00:17:44.691 "trtype": "TCP", 00:17:44.691 "adrfam": "IPv4", 00:17:44.691 "traddr": "10.0.0.2", 00:17:44.691 "trsvcid": "4420" 00:17:44.691 }, 00:17:44.691 "peer_address": { 
00:17:44.691 "trtype": "TCP", 00:17:44.691 "adrfam": "IPv4", 00:17:44.691 "traddr": "10.0.0.1", 00:17:44.691 "trsvcid": "49338" 00:17:44.691 }, 00:17:44.691 "auth": { 00:17:44.691 "state": "completed", 00:17:44.691 "digest": "sha512", 00:17:44.691 "dhgroup": "ffdhe4096" 00:17:44.691 } 00:17:44.691 } 00:17:44.691 ]' 00:17:44.691 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:44.691 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:44.691 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:44.691 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:44.691 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:44.949 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.949 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.949 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.207 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:03:YjliYzc2OGRhM2Y1YjI4MzAyOGI2NjVmOGRkZmQ3MmQ1NjRhYTEzNDVmNjk2ZWJkOWYzMzk5ODE2M2RkOTI1MlpGH1c=: 00:17:45.773 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.773 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:17:45.773 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.773 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.773 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.773 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:45.773 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:45.773 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:45.773 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:46.031 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:17:46.031 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:46.031 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha512 00:17:46.031 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:46.031 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:46.031 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.031 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.031 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.031 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.031 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.031 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.031 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.597 00:17:46.597 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:46.597 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.597 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:46.855 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.855 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.855 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.855 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.855 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.855 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:46.855 { 00:17:46.855 "cntlid": 129, 00:17:46.855 "qid": 0, 00:17:46.855 "state": "enabled", 00:17:46.855 "thread": "nvmf_tgt_poll_group_000", 00:17:46.855 "listen_address": { 00:17:46.855 "trtype": "TCP", 00:17:46.855 "adrfam": "IPv4", 00:17:46.855 "traddr": "10.0.0.2", 00:17:46.855 "trsvcid": "4420" 00:17:46.855 }, 00:17:46.855 "peer_address": { 00:17:46.855 "trtype": "TCP", 00:17:46.855 "adrfam": "IPv4", 00:17:46.855 "traddr": "10.0.0.1", 00:17:46.855 "trsvcid": "55704" 00:17:46.855 }, 00:17:46.855 "auth": { 00:17:46.855 "state": "completed", 00:17:46.855 "digest": "sha512", 00:17:46.855 "dhgroup": "ffdhe6144" 00:17:46.855 } 00:17:46.855 } 00:17:46.855 ]' 00:17:46.855 08:59:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:46.855 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:46.855 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:46.855 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:46.855 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:46.855 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.855 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.855 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.113 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:00:MjlmNzExODE4NTc2MmNmYWU0ZjYwMGI2ZjA4YTcyMjRjNGFmOGNmMTI4ZGQ5NTEyJWIzmQ==: --dhchap-ctrl-secret DHHC-1:03:NWE1MzkzODA0NDk2YThjNTlkNDYxNjBhZTliMDlkM2I2MTY3NjFmODQxMDU5MDU4ZjFhNDMxNzg4MDVlOWZjZEEK018=: 00:17:48.048 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.048 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:17:48.048 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.048 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.048 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.048 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:48.048 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:48.048 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:48.048 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:17:48.048 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:48.048 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:48.048 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:48.048 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:48.048 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:17:48.048 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.048 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.048 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.048 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.048 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.048 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.613 00:17:48.613 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:48.613 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:48.613 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.871 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.871 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.871 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.871 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.871 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.871 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:48.871 { 00:17:48.871 "cntlid": 131, 00:17:48.871 "qid": 0, 00:17:48.871 "state": "enabled", 00:17:48.871 "thread": "nvmf_tgt_poll_group_000", 00:17:48.871 "listen_address": { 00:17:48.871 "trtype": "TCP", 00:17:48.871 "adrfam": "IPv4", 00:17:48.871 "traddr": "10.0.0.2", 00:17:48.871 "trsvcid": "4420" 00:17:48.871 }, 00:17:48.871 "peer_address": { 00:17:48.871 "trtype": "TCP", 00:17:48.871 "adrfam": "IPv4", 00:17:48.871 "traddr": "10.0.0.1", 00:17:48.871 "trsvcid": "55720" 00:17:48.871 }, 00:17:48.871 "auth": { 00:17:48.871 "state": "completed", 00:17:48.871 "digest": "sha512", 00:17:48.871 "dhgroup": "ffdhe6144" 00:17:48.871 } 00:17:48.871 } 00:17:48.871 ]' 00:17:48.871 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:48.871 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:48.871 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:48.871 08:59:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:48.871 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:49.129 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.129 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.129 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.388 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:01:NzQ0NzIxYmI0MDRjYjI1MjMzYzJlMmRiNWFlZDFjNjXiFtJQ: --dhchap-ctrl-secret DHHC-1:02:MmVmMmUyZjcxYmMyNjdlNGJiNTg0Yjc2ZTM5MWRkNGQxZTQ1Y2MzNTQwNmViOGQ58n44LA==: 00:17:49.962 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.962 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:17:49.962 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.962 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.962 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.962 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:49.962 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:49.962 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:50.252 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:17:50.252 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:50.252 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:50.252 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:50.252 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:50.252 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.252 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.252 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
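After the SPDK host-side attach/verify cycle, each pass repeats the authentication from the Linux kernel initiator: nvme-cli is handed the same key material as literal DHHC-1 secret strings, and a connect/disconnect pair confirms that in-band DH-HMAC-CHAP works end to end. A minimal sketch of that step (the hostid, host NQN and secrets are the test values visible in the log, abbreviated here; on a real deployment they would be provisioned out of band rather than copied from a trace):

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5
    # Connect with the host secret and, when configured, the controller (bidirectional) secret.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 \
        --dhchap-secret 'DHHC-1:01:NzQ0...' --dhchap-ctrl-secret 'DHHC-1:02:MmVm...'
    # Tear the session down before the next key/dhgroup combination.
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
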
00:17:50.252 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.252 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.252 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.252 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.818 00:17:50.818 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:50.818 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:50.818 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.077 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.077 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.077 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.077 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.077 08:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.077 08:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:51.077 { 00:17:51.077 "cntlid": 133, 00:17:51.077 "qid": 0, 00:17:51.077 "state": "enabled", 00:17:51.077 "thread": "nvmf_tgt_poll_group_000", 00:17:51.077 "listen_address": { 00:17:51.077 "trtype": "TCP", 00:17:51.077 "adrfam": "IPv4", 00:17:51.077 "traddr": "10.0.0.2", 00:17:51.077 "trsvcid": "4420" 00:17:51.077 }, 00:17:51.077 "peer_address": { 00:17:51.077 "trtype": "TCP", 00:17:51.077 "adrfam": "IPv4", 00:17:51.077 "traddr": "10.0.0.1", 00:17:51.077 "trsvcid": "55750" 00:17:51.077 }, 00:17:51.077 "auth": { 00:17:51.077 "state": "completed", 00:17:51.077 "digest": "sha512", 00:17:51.077 "dhgroup": "ffdhe6144" 00:17:51.077 } 00:17:51.077 } 00:17:51.077 ]' 00:17:51.077 08:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:51.077 08:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:51.077 08:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:51.077 08:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:51.077 08:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:51.077 08:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.077 08:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.077 08:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.644 08:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:02:YmZlODA0YzdjMGI1ZjgwMjgyZDA3ZGJlYWM3MjAyN2RmMDZmMmJiOTIzY2U3MmYw8KNWIw==: --dhchap-ctrl-secret DHHC-1:01:YWI5ZjhlNjA2NjExMGY3ZTk2ODdhYTE3MTRkODk2MTL2aiAd: 00:17:52.212 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.212 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:17:52.212 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.212 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.212 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.212 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:52.212 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:52.212 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:52.471 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:17:52.471 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:52.471 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:52.471 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:52.471 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:52.471 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.471 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key3 00:17:52.471 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.471 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.471 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.471 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:52.471 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:53.038 00:17:53.038 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:53.038 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.038 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:53.297 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.297 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.297 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.297 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.297 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.297 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:53.297 { 00:17:53.297 "cntlid": 135, 00:17:53.297 "qid": 0, 00:17:53.297 "state": "enabled", 00:17:53.297 "thread": "nvmf_tgt_poll_group_000", 00:17:53.297 "listen_address": { 00:17:53.297 "trtype": "TCP", 00:17:53.297 "adrfam": "IPv4", 00:17:53.297 "traddr": "10.0.0.2", 00:17:53.297 "trsvcid": "4420" 00:17:53.297 }, 00:17:53.297 "peer_address": { 00:17:53.297 "trtype": "TCP", 00:17:53.297 "adrfam": "IPv4", 00:17:53.297 "traddr": "10.0.0.1", 00:17:53.297 "trsvcid": "55792" 00:17:53.297 }, 00:17:53.297 "auth": { 00:17:53.297 "state": "completed", 00:17:53.297 "digest": "sha512", 00:17:53.297 "dhgroup": "ffdhe6144" 00:17:53.297 } 00:17:53.297 } 00:17:53.297 ]' 00:17:53.297 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:53.297 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:53.297 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:53.297 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:53.297 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:53.297 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.297 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.555 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.814 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:03:YjliYzc2OGRhM2Y1YjI4MzAyOGI2NjVmOGRkZmQ3MmQ1NjRhYTEzNDVmNjk2ZWJkOWYzMzk5ODE2M2RkOTI1MlpGH1c=: 00:17:54.381 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.381 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.381 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:17:54.381 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.381 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.381 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.381 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:54.381 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:54.381 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:54.381 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:54.638 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:17:54.638 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:54.638 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:54.638 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:54.638 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:54.638 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.638 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.638 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.638 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.638 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.638 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.638 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.200 00:17:55.200 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:55.200 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:55.200 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.769 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.769 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.769 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.769 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.769 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.769 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:55.769 { 00:17:55.769 "cntlid": 137, 00:17:55.769 "qid": 0, 00:17:55.769 "state": "enabled", 00:17:55.769 "thread": "nvmf_tgt_poll_group_000", 00:17:55.769 "listen_address": { 00:17:55.769 "trtype": "TCP", 00:17:55.769 "adrfam": "IPv4", 00:17:55.769 "traddr": "10.0.0.2", 00:17:55.769 "trsvcid": "4420" 00:17:55.769 }, 00:17:55.769 "peer_address": { 00:17:55.769 "trtype": "TCP", 00:17:55.769 "adrfam": "IPv4", 00:17:55.769 "traddr": "10.0.0.1", 00:17:55.769 "trsvcid": "55804" 00:17:55.769 }, 00:17:55.769 "auth": { 00:17:55.769 "state": "completed", 00:17:55.769 "digest": "sha512", 00:17:55.769 "dhgroup": "ffdhe8192" 00:17:55.769 } 00:17:55.769 } 00:17:55.769 ]' 00:17:55.769 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:55.769 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:55.769 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:55.769 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:55.769 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:55.769 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.769 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.769 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.028 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:00:MjlmNzExODE4NTc2MmNmYWU0ZjYwMGI2ZjA4YTcyMjRjNGFmOGNmMTI4ZGQ5NTEyJWIzmQ==: --dhchap-ctrl-secret DHHC-1:03:NWE1MzkzODA0NDk2YThjNTlkNDYxNjBhZTliMDlkM2I2MTY3NjFmODQxMDU5MDU4ZjFhNDMxNzg4MDVlOWZjZEEK018=: 00:17:56.594 09:00:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.594 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:17:56.594 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.594 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.594 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.594 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:56.594 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:56.594 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:56.854 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:17:56.854 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:56.854 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:56.854 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:56.854 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:56.854 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.854 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.854 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.854 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.112 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.112 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.112 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.678 00:17:57.678 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:57.678 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.678 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:57.936 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.936 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.936 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.936 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.936 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.936 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:57.936 { 00:17:57.936 "cntlid": 139, 00:17:57.936 "qid": 0, 00:17:57.936 "state": "enabled", 00:17:57.936 "thread": "nvmf_tgt_poll_group_000", 00:17:57.936 "listen_address": { 00:17:57.936 "trtype": "TCP", 00:17:57.936 "adrfam": "IPv4", 00:17:57.936 "traddr": "10.0.0.2", 00:17:57.936 "trsvcid": "4420" 00:17:57.936 }, 00:17:57.936 "peer_address": { 00:17:57.936 "trtype": "TCP", 00:17:57.936 "adrfam": "IPv4", 00:17:57.936 "traddr": "10.0.0.1", 00:17:57.936 "trsvcid": "57902" 00:17:57.936 }, 00:17:57.936 "auth": { 00:17:57.936 "state": "completed", 00:17:57.936 "digest": "sha512", 00:17:57.936 "dhgroup": "ffdhe8192" 00:17:57.936 } 00:17:57.936 } 00:17:57.936 ]' 00:17:57.936 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:57.936 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:57.936 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:57.936 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:57.936 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:57.936 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.936 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.936 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.551 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:01:NzQ0NzIxYmI0MDRjYjI1MjMzYzJlMmRiNWFlZDFjNjXiFtJQ: --dhchap-ctrl-secret DHHC-1:02:MmVmMmUyZjcxYmMyNjdlNGJiNTg0Yjc2ZTM5MWRkNGQxZTQ1Y2MzNTQwNmViOGQ58n44LA==: 00:17:59.119 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.119 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 
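Each block in this portion of the log is one iteration of the same nested loop: the outer loop walks the DH groups (ffdhe4096, ffdhe6144 and now ffdhe8192, all with the sha512 digest), the inner loop walks the four key indexes, and the key3 passes are the ones configured without a bidirectional controller key, which is why their nvmf_subsystem_add_host and bdev_nvme_attach_controller calls carry no --dhchap-ctrlr-key argument. Reconstructed from the xtrace (a sketch, not the verbatim script; keys[], ckeys[] and the dhgroups array are populated earlier in auth.sh):

    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # Re-arm the host bdev layer for the digest/dhgroup under test...
            hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
            # ...then run one add_host / attach / verify / detach / nvme-connect cycle.
            connect_authenticate sha512 "$dhgroup" "$keyid"
        done
    done
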
00:17:59.119 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.119 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.119 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.119 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:59.119 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:59.119 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:59.377 09:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:17:59.377 09:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:59.377 09:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:59.377 09:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:59.377 09:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:59.377 09:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.378 09:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.378 09:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.378 09:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.378 09:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.378 09:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.378 09:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.943 00:17:59.943 09:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:59.943 09:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:59.943 09:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.200 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.200 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.200 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.200 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.200 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.200 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:00.200 { 00:18:00.200 "cntlid": 141, 00:18:00.200 "qid": 0, 00:18:00.200 "state": "enabled", 00:18:00.200 "thread": "nvmf_tgt_poll_group_000", 00:18:00.200 "listen_address": { 00:18:00.200 "trtype": "TCP", 00:18:00.200 "adrfam": "IPv4", 00:18:00.200 "traddr": "10.0.0.2", 00:18:00.200 "trsvcid": "4420" 00:18:00.200 }, 00:18:00.200 "peer_address": { 00:18:00.200 "trtype": "TCP", 00:18:00.200 "adrfam": "IPv4", 00:18:00.200 "traddr": "10.0.0.1", 00:18:00.200 "trsvcid": "57942" 00:18:00.200 }, 00:18:00.200 "auth": { 00:18:00.200 "state": "completed", 00:18:00.200 "digest": "sha512", 00:18:00.200 "dhgroup": "ffdhe8192" 00:18:00.200 } 00:18:00.200 } 00:18:00.200 ]' 00:18:00.200 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:00.200 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:00.200 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:00.458 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:00.458 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:00.458 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.458 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.458 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.716 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:02:YmZlODA0YzdjMGI1ZjgwMjgyZDA3ZGJlYWM3MjAyN2RmMDZmMmJiOTIzY2U3MmYw8KNWIw==: --dhchap-ctrl-secret DHHC-1:01:YWI5ZjhlNjA2NjExMGY3ZTk2ODdhYTE3MTRkODk2MTL2aiAd: 00:18:01.292 09:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.292 09:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:18:01.292 09:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.292 09:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.292 09:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.292 09:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:01.292 09:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:01.292 09:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:01.550 09:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:18:01.550 09:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:01.550 09:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:01.550 09:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:01.550 09:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:01.550 09:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.550 09:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key3 00:18:01.550 09:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.550 09:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.550 09:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.550 09:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:01.550 09:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:02.116 00:18:02.116 09:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:02.116 09:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:02.116 09:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.375 09:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.375 09:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.375 09:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.375 09:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.375 09:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.375 09:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:18:02.375 { 00:18:02.375 "cntlid": 143, 00:18:02.375 "qid": 0, 00:18:02.375 "state": "enabled", 00:18:02.375 "thread": "nvmf_tgt_poll_group_000", 00:18:02.375 "listen_address": { 00:18:02.375 "trtype": "TCP", 00:18:02.375 "adrfam": "IPv4", 00:18:02.375 "traddr": "10.0.0.2", 00:18:02.375 "trsvcid": "4420" 00:18:02.375 }, 00:18:02.375 "peer_address": { 00:18:02.375 "trtype": "TCP", 00:18:02.375 "adrfam": "IPv4", 00:18:02.375 "traddr": "10.0.0.1", 00:18:02.375 "trsvcid": "57952" 00:18:02.375 }, 00:18:02.375 "auth": { 00:18:02.375 "state": "completed", 00:18:02.375 "digest": "sha512", 00:18:02.375 "dhgroup": "ffdhe8192" 00:18:02.375 } 00:18:02.375 } 00:18:02.375 ]' 00:18:02.375 09:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:02.633 09:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:02.633 09:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:02.633 09:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:02.633 09:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:02.633 09:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.633 09:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.633 09:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.891 09:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:03:YjliYzc2OGRhM2Y1YjI4MzAyOGI2NjVmOGRkZmQ3MmQ1NjRhYTEzNDVmNjk2ZWJkOWYzMzk5ODE2M2RkOTI1MlpGH1c=: 00:18:03.826 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.826 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:18:03.826 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.826 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.826 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.826 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:03.826 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:18:03.826 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:03.826 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:03.826 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 
--dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:03.826 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:03.826 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:18:03.826 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:03.826 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:03.826 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:03.826 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:03.826 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.826 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.826 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.826 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.826 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.826 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.826 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.761 00:18:04.761 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:04.761 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.761 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:04.761 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.761 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.761 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.761 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.761 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.761 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:04.761 { 
00:18:04.761 "cntlid": 145, 00:18:04.761 "qid": 0, 00:18:04.761 "state": "enabled", 00:18:04.761 "thread": "nvmf_tgt_poll_group_000", 00:18:04.761 "listen_address": { 00:18:04.761 "trtype": "TCP", 00:18:04.761 "adrfam": "IPv4", 00:18:04.761 "traddr": "10.0.0.2", 00:18:04.761 "trsvcid": "4420" 00:18:04.761 }, 00:18:04.761 "peer_address": { 00:18:04.761 "trtype": "TCP", 00:18:04.761 "adrfam": "IPv4", 00:18:04.761 "traddr": "10.0.0.1", 00:18:04.761 "trsvcid": "57982" 00:18:04.761 }, 00:18:04.761 "auth": { 00:18:04.761 "state": "completed", 00:18:04.761 "digest": "sha512", 00:18:04.761 "dhgroup": "ffdhe8192" 00:18:04.761 } 00:18:04.761 } 00:18:04.761 ]' 00:18:04.761 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:05.018 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:05.018 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:05.018 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:05.018 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:05.018 09:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.018 09:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.018 09:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.277 09:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:00:MjlmNzExODE4NTc2MmNmYWU0ZjYwMGI2ZjA4YTcyMjRjNGFmOGNmMTI4ZGQ5NTEyJWIzmQ==: --dhchap-ctrl-secret DHHC-1:03:NWE1MzkzODA0NDk2YThjNTlkNDYxNjBhZTliMDlkM2I2MTY3NjFmODQxMDU5MDU4ZjFhNDMxNzg4MDVlOWZjZEEK018=: 00:18:06.212 09:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.212 09:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:18:06.212 09:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.212 09:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.212 09:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.212 09:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key1 00:18:06.212 09:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.212 09:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.212 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.212 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:06.212 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:06.212 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:06.212 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:06.212 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:06.212 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:06.212 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:06.212 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:06.212 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:06.470 request: 00:18:06.470 { 00:18:06.470 "name": "nvme0", 00:18:06.470 "trtype": "tcp", 00:18:06.470 "traddr": "10.0.0.2", 00:18:06.470 "adrfam": "ipv4", 00:18:06.470 "trsvcid": "4420", 00:18:06.470 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:06.470 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5", 00:18:06.470 "prchk_reftag": false, 00:18:06.470 "prchk_guard": false, 00:18:06.470 "hdgst": false, 00:18:06.470 "ddgst": false, 00:18:06.470 "dhchap_key": "key2", 00:18:06.470 "method": "bdev_nvme_attach_controller", 00:18:06.470 "req_id": 1 00:18:06.470 } 00:18:06.470 Got JSON-RPC error response 00:18:06.470 response: 00:18:06.470 { 00:18:06.470 "code": -5, 00:18:06.470 "message": "Input/output error" 00:18:06.470 } 00:18:06.728 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:06.728 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:06.728 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:06.728 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:06.728 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:18:06.728 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.728 09:00:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.729 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.729 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.729 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.729 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.729 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.729 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:06.729 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:06.729 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:06.729 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:06.729 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:06.729 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:06.729 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:06.729 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:06.729 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:07.303 request: 00:18:07.303 { 00:18:07.303 "name": "nvme0", 00:18:07.303 "trtype": "tcp", 00:18:07.303 "traddr": "10.0.0.2", 00:18:07.303 "adrfam": "ipv4", 00:18:07.303 "trsvcid": "4420", 00:18:07.303 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:07.303 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5", 00:18:07.303 "prchk_reftag": false, 00:18:07.303 "prchk_guard": false, 00:18:07.303 "hdgst": false, 00:18:07.303 "ddgst": false, 00:18:07.303 "dhchap_key": "key1", 00:18:07.303 "dhchap_ctrlr_key": "ckey2", 00:18:07.303 "method": "bdev_nvme_attach_controller", 00:18:07.303 "req_id": 1 00:18:07.303 } 00:18:07.303 Got JSON-RPC error response 00:18:07.303 response: 00:18:07.303 { 00:18:07.303 "code": -5, 
00:18:07.303 "message": "Input/output error" 00:18:07.303 } 00:18:07.303 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:07.303 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:07.303 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:07.303 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:07.303 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:18:07.303 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.303 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.303 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.303 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key1 00:18:07.303 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.303 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.303 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.304 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.304 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:07.304 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.304 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:07.304 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:07.304 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:07.304 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:07.304 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.304 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.871 request: 00:18:07.871 { 00:18:07.871 "name": "nvme0", 00:18:07.871 "trtype": "tcp", 00:18:07.871 "traddr": "10.0.0.2", 00:18:07.871 "adrfam": "ipv4", 00:18:07.871 "trsvcid": "4420", 00:18:07.871 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:07.871 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5", 00:18:07.871 "prchk_reftag": false, 00:18:07.871 "prchk_guard": false, 00:18:07.871 "hdgst": false, 00:18:07.871 "ddgst": false, 00:18:07.871 "dhchap_key": "key1", 00:18:07.871 "dhchap_ctrlr_key": "ckey1", 00:18:07.871 "method": "bdev_nvme_attach_controller", 00:18:07.871 "req_id": 1 00:18:07.871 } 00:18:07.872 Got JSON-RPC error response 00:18:07.872 response: 00:18:07.872 { 00:18:07.872 "code": -5, 00:18:07.872 "message": "Input/output error" 00:18:07.872 } 00:18:07.872 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:07.872 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:07.872 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:07.872 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:07.872 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:18:07.872 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.872 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.872 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.872 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 71611 00:18:07.872 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 71611 ']' 00:18:07.872 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 71611 00:18:07.872 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:07.872 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:07.872 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71611 00:18:07.872 killing process with pid 71611 00:18:07.872 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:07.872 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:07.872 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71611' 00:18:07.872 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 71611 00:18:07.872 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 71611 00:18:09.246 09:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:09.246 09:00:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:09.246 09:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:09.246 09:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.246 09:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=74651 00:18:09.246 09:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:09.246 09:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 74651 00:18:09.246 09:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 74651 ']' 00:18:09.246 09:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.246 09:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:09.246 09:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.246 09:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:09.246 09:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.226 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:10.226 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:10.226 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:10.226 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:10.226 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.226 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:10.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:10.226 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:10.226 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 74651 00:18:10.226 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 74651 ']' 00:18:10.226 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.226 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:10.226 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
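
Editor's note: the nvmfappstart/waitforlisten step above relaunches the target with DH-HMAC-CHAP debug logging before the second half of the test. Below is a rough, illustrative equivalent of what that step amounts to; it is not SPDK's actual helpers (those live in nvmf/common.sh and autotest_common.sh), and the polling loop is an assumption about their effect, not their implementation.

# Illustrative only: approximately what nvmfappstart --wait-for-rpc -L nvmf_auth and
# waitforlisten do at this point in the log. Binary path, namespace and flags are
# copied from the trace above.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!

# Poll until the JSON-RPC socket exists and answers; bail out if the target dies first.
until [ -S /var/tmp/spdk.sock ] &&
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
    sleep 0.5
done
# Because of --wait-for-rpc the framework is still uninitialized here; the rpc_cmd
# block at target/auth.sh@143 in the trace below presumably finishes configuration
# before the keys are exercised again.
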
00:18:10.226 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:10.226 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.485 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:10.485 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:10.485 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:18:10.485 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.485 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.743 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.743 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:18:10.743 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:10.743 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:10.743 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:10.743 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:10.743 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.743 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key3 00:18:10.743 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.743 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.743 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.743 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:10.743 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:11.675 00:18:11.675 09:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:11.675 09:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.675 09:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:11.932 09:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.932 09:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
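
Editor's note: one detail worth calling out from the connect_authenticate trace above is ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}). It uses bash's ${var:+word} expansion so that bidirectional authentication is requested only when a controller key exists for that slot; key3 has no ckey3, which is why the add_host and attach_controller calls above carry --dhchap-key key3 alone. A minimal, self-contained illustration of the same idiom follows (the array contents are made up, not the test's generated secrets).

# Minimal illustration of the ${var:+word} idiom used for the optional controller key
# (array contents here are made up; in auth.sh they are the generated DHHC-1 secrets).
ckeys=("DHHC-1:03:placeholder0" "DHHC-1:02:placeholder1" "DHHC-1:01:placeholder2")   # no index 3

for keyid in 1 3; do
    # Expands to two words (--dhchap-ctrlr-key ckeyN) when ckeys[keyid] is set and
    # non-empty, and to nothing at all when it is unset -- the option simply vanishes.
    ckey_arg=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "key$keyid: ${ckey_arg[*]:-<unidirectional, no ctrlr key>}"
done
# Output:
# key1: --dhchap-ctrlr-key ckey1
# key3: <unidirectional, no ctrlr key>
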
00:18:11.932 09:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.932 09:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.932 09:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.932 09:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:11.932 { 00:18:11.932 "cntlid": 1, 00:18:11.932 "qid": 0, 00:18:11.932 "state": "enabled", 00:18:11.932 "thread": "nvmf_tgt_poll_group_000", 00:18:11.932 "listen_address": { 00:18:11.932 "trtype": "TCP", 00:18:11.932 "adrfam": "IPv4", 00:18:11.932 "traddr": "10.0.0.2", 00:18:11.932 "trsvcid": "4420" 00:18:11.932 }, 00:18:11.932 "peer_address": { 00:18:11.932 "trtype": "TCP", 00:18:11.932 "adrfam": "IPv4", 00:18:11.932 "traddr": "10.0.0.1", 00:18:11.932 "trsvcid": "33376" 00:18:11.932 }, 00:18:11.932 "auth": { 00:18:11.932 "state": "completed", 00:18:11.932 "digest": "sha512", 00:18:11.932 "dhgroup": "ffdhe8192" 00:18:11.932 } 00:18:11.932 } 00:18:11.932 ]' 00:18:11.932 09:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:11.932 09:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:11.932 09:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:11.932 09:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:11.932 09:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:11.932 09:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.932 09:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.933 09:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.199 09:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-secret DHHC-1:03:YjliYzc2OGRhM2Y1YjI4MzAyOGI2NjVmOGRkZmQ3MmQ1NjRhYTEzNDVmNjk2ZWJkOWYzMzk5ODE2M2RkOTI1MlpGH1c=: 00:18:13.170 09:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.170 09:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:18:13.170 09:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.170 09:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.170 09:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.170 09:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --dhchap-key key3 00:18:13.170 09:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.170 09:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.170 09:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.170 09:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:13.170 09:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:13.428 09:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:13.428 09:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:13.428 09:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:13.428 09:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:13.428 09:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:13.428 09:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:13.428 09:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:13.428 09:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:13.428 09:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:13.736 request: 00:18:13.736 { 00:18:13.736 "name": "nvme0", 00:18:13.736 "trtype": "tcp", 00:18:13.736 "traddr": "10.0.0.2", 00:18:13.736 "adrfam": "ipv4", 00:18:13.736 "trsvcid": "4420", 00:18:13.736 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:13.736 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5", 00:18:13.736 "prchk_reftag": false, 00:18:13.736 "prchk_guard": false, 00:18:13.736 "hdgst": false, 00:18:13.736 "ddgst": false, 00:18:13.736 "dhchap_key": "key3", 00:18:13.736 "method": "bdev_nvme_attach_controller", 00:18:13.736 "req_id": 1 00:18:13.736 } 00:18:13.736 Got JSON-RPC error response 00:18:13.736 response: 00:18:13.736 { 00:18:13.736 "code": -5, 00:18:13.736 "message": "Input/output error" 00:18:13.736 } 00:18:13.736 09:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 
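
Editor's note: the NOT wrappers in this part of the trace are negative tests. After the host was limited to sha256 digests (target/auth.sh@157), the attach at @158 is expected to be refused, and the JSON-RPC error just above (code -5, Input/output error) is that refusal surfacing through bdev_nvme_attach_controller. Below is a simplified rendering of the pattern; the real NOT helper in autotest_common.sh has extra handling, visible elsewhere in the trace as the es > 128 and (( !es == 0 )) checks.

# Simplified NOT helper: run a command that is expected to fail and succeed only
# if it really does return a non-zero status (i.e. the attach was refused).
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))
}

# Usage mirroring target/auth.sh@158: with the host app limited to sha256 digests,
# this attach must be rejected for the test to pass.
NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
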
00:18:13.736 09:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:13.736 09:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:13.736 09:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:13.736 09:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:18:13.736 09:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:18:13.736 09:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:13.736 09:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:13.994 09:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:13.994 09:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:13.994 09:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:13.994 09:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:13.994 09:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:13.994 09:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:13.994 09:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:13.995 09:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:13.995 09:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:14.252 request: 00:18:14.252 { 00:18:14.252 "name": "nvme0", 00:18:14.253 "trtype": "tcp", 00:18:14.253 "traddr": "10.0.0.2", 00:18:14.253 "adrfam": "ipv4", 00:18:14.253 "trsvcid": "4420", 00:18:14.253 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:14.253 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5", 00:18:14.253 "prchk_reftag": false, 00:18:14.253 "prchk_guard": false, 00:18:14.253 "hdgst": false, 00:18:14.253 "ddgst": false, 00:18:14.253 "dhchap_key": "key3", 00:18:14.253 "method": "bdev_nvme_attach_controller", 00:18:14.253 "req_id": 1 00:18:14.253 } 00:18:14.253 Got JSON-RPC error response 
00:18:14.253 response: 00:18:14.253 { 00:18:14.253 "code": -5, 00:18:14.253 "message": "Input/output error" 00:18:14.253 } 00:18:14.253 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:14.253 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:14.253 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:14.253 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:14.253 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:14.253 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:18:14.253 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:14.253 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:14.253 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:14.253 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:14.511 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:18:14.511 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.511 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.511 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.511 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:18:14.511 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.511 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.511 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.511 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:14.511 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:14.511 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:14.511 09:00:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:14.511 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:14.511 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:14.511 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:14.511 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:14.511 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:14.769 request: 00:18:14.769 { 00:18:14.769 "name": "nvme0", 00:18:14.769 "trtype": "tcp", 00:18:14.769 "traddr": "10.0.0.2", 00:18:14.769 "adrfam": "ipv4", 00:18:14.769 "trsvcid": "4420", 00:18:14.769 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:14.769 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5", 00:18:14.769 "prchk_reftag": false, 00:18:14.769 "prchk_guard": false, 00:18:14.769 "hdgst": false, 00:18:14.769 "ddgst": false, 00:18:14.769 "dhchap_key": "key0", 00:18:14.769 "dhchap_ctrlr_key": "key1", 00:18:14.769 "method": "bdev_nvme_attach_controller", 00:18:14.769 "req_id": 1 00:18:14.769 } 00:18:14.769 Got JSON-RPC error response 00:18:14.769 response: 00:18:14.769 { 00:18:14.769 "code": -5, 00:18:14.769 "message": "Input/output error" 00:18:14.769 } 00:18:14.769 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:14.769 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:14.769 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:14.769 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:14.769 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:14.769 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:15.028 00:18:15.028 09:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:18:15.028 09:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.028 09:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:18:15.595 09:00:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.595 09:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.595 09:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.853 09:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:18:15.853 09:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:18:15.853 09:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 71643 00:18:15.853 09:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 71643 ']' 00:18:15.853 09:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 71643 00:18:15.853 09:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:15.853 09:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:15.853 09:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71643 00:18:15.853 killing process with pid 71643 00:18:15.853 09:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:15.853 09:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:15.853 09:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71643' 00:18:15.853 09:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 71643 00:18:15.853 09:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 71643 00:18:18.411 09:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:18.411 09:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:18.411 09:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:18:18.411 09:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:18.411 09:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:18:18.411 09:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:18.412 09:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:18.412 rmmod nvme_tcp 00:18:18.412 rmmod nvme_fabrics 00:18:18.412 rmmod nvme_keyring 00:18:18.412 09:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:18.412 09:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:18:18.412 09:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:18:18.412 09:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 74651 ']' 00:18:18.412 09:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 74651 00:18:18.412 09:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 74651 ']' 00:18:18.412 
09:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 74651 00:18:18.412 09:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:18.412 09:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:18.412 09:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74651 00:18:18.412 killing process with pid 74651 00:18:18.412 09:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:18.412 09:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:18.412 09:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74651' 00:18:18.412 09:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 74651 00:18:18.412 09:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 74651 00:18:19.342 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:19.342 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:19.342 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:19.342 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:19.342 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:19.342 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:19.342 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:19.342 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:19.342 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:19.342 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.kiq /tmp/spdk.key-sha256.Tah /tmp/spdk.key-sha384.ql9 /tmp/spdk.key-sha512.qZq /tmp/spdk.key-sha512.yp5 /tmp/spdk.key-sha384.rPT /tmp/spdk.key-sha256.8PC '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:18:19.342 00:18:19.342 real 2m55.799s 00:18:19.342 user 6m57.605s 00:18:19.342 sys 0m27.092s 00:18:19.342 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:19.342 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.342 ************************************ 00:18:19.342 END TEST nvmf_auth_target 00:18:19.342 ************************************ 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:19.601 ************************************ 00:18:19.601 START TEST nvmf_bdevio_no_huge 00:18:19.601 ************************************ 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:19.601 * Looking for test storage... 00:18:19.601 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:19.601 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:19.602 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:19.602 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:19.602 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:19.602 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:19.602 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:19.602 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:19.602 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:19.602 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:19.602 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:19.602 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:19.602 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:19.602 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:19.602 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:19.602 Cannot find device "nvmf_tgt_br" 00:18:19.602 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:18:19.602 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:19.602 Cannot find device "nvmf_tgt_br2" 00:18:19.602 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:18:19.602 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:19.602 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:19.602 Cannot find device "nvmf_tgt_br" 00:18:19.602 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:18:19.602 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:19.602 Cannot find device "nvmf_tgt_br2" 00:18:19.602 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:18:19.602 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:19.602 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:19.602 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:19.602 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:19.602 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:18:19.602 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:19.861 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:19.861 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:18:19.861 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:19.861 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:19.861 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:19.861 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:19.861 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:19.861 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:19.861 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:19.861 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:19.861 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:19.861 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:19.861 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:19.861 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:19.861 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:19.861 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:19.861 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:19.861 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:19.861 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:19.861 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:19.861 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:19.861 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:19.861 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:19.861 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:19.861 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:19.861 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:19.861 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:19.861 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:18:19.861 00:18:19.861 --- 10.0.0.2 ping statistics --- 00:18:19.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.861 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:18:19.861 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:19.861 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:19.861 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:18:19.861 00:18:19.861 --- 10.0.0.3 ping statistics --- 00:18:19.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.861 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:18:19.861 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:19.861 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:19.861 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:18:19.861 00:18:19.861 --- 10.0.0.1 ping statistics --- 00:18:19.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.861 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:18:19.861 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:19.861 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:18:19.861 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:19.861 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:19.861 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:19.861 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:19.861 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:19.861 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:19.861 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:19.861 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:19.861 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:19.861 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:19.861 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:19.861 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=75016 00:18:19.861 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:19.861 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 75016 00:18:19.861 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 75016 ']' 00:18:19.862 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.862 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:19.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.862 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.862 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:19.862 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:20.119 [2024-07-25 09:00:27.047899] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
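The trace above is SPDK's nvmf_veth_init followed by nvmfappstart: it wires veth pairs into the nvmf_tgt_ns_spdk namespace, bridges the host-side peers, opens TCP port 4420, checks connectivity with ping, loads nvme-tcp, and starts nvmf_tgt inside the namespace with hugepages disabled. A condensed sketch of the same steps, using only the names, addresses, and flags visible in this run (the second target interface is omitted and the polling loop is a simplified stand-in for waitforlisten, so treat it as illustrative rather than the helpers' exact bodies):

# Namespace/veth/bridge topology for the no-huge run (run as root).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                               # bridge the host-side peers
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                            # host side reaches the target address

# Start the target in the namespace without hugepages (-m 0x78 matches the reactor cores reported above).
modprobe nvme-tcp
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
nvmfpid=$!
# Stand-in for waitforlisten: poll until the RPC socket answers.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done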
00:18:20.119 [2024-07-25 09:00:27.048059] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:20.377 [2024-07-25 09:00:27.239370] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:20.635 [2024-07-25 09:00:27.537367] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:20.635 [2024-07-25 09:00:27.537443] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:20.635 [2024-07-25 09:00:27.537479] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:20.635 [2024-07-25 09:00:27.537493] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:20.635 [2024-07-25 09:00:27.537508] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:20.635 [2024-07-25 09:00:27.537724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:20.635 [2024-07-25 09:00:27.537838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:18:20.635 [2024-07-25 09:00:27.538297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:20.635 [2024-07-25 09:00:27.538314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:18:20.635 [2024-07-25 09:00:27.697545] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:20.894 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:20.894 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:18:20.894 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:20.894 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:20.894 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:21.160 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:21.160 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:21.160 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.160 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:21.160 [2024-07-25 09:00:28.044907] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:21.160 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.160 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:21.160 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.160 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:21.160 Malloc0 00:18:21.160 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.160 09:00:28 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:21.161 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.161 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:21.161 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.161 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:21.161 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.161 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:21.161 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.161 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:21.161 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.161 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:21.161 [2024-07-25 09:00:28.144605] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:21.161 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.161 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:21.161 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:21.161 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:18:21.161 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:18:21.161 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:21.161 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:21.161 { 00:18:21.161 "params": { 00:18:21.161 "name": "Nvme$subsystem", 00:18:21.161 "trtype": "$TEST_TRANSPORT", 00:18:21.161 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:21.161 "adrfam": "ipv4", 00:18:21.161 "trsvcid": "$NVMF_PORT", 00:18:21.161 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:21.161 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:21.161 "hdgst": ${hdgst:-false}, 00:18:21.161 "ddgst": ${ddgst:-false} 00:18:21.161 }, 00:18:21.161 "method": "bdev_nvme_attach_controller" 00:18:21.161 } 00:18:21.161 EOF 00:18:21.161 )") 00:18:21.161 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:18:21.161 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
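rpc_cmd in the trace above forwards to scripts/rpc.py on the default /var/tmp/spdk.sock, so the provisioning sequence for this bdevio run, spelled out explicitly, is roughly the following ('-t tcp -o' comes from NVMF_TRANSPORT_OPTS set earlier in the trace; the flag comments are interpretive):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192                          # TCP transport with the options as traced
$rpc bdev_malloc_create 64 512 -b Malloc0                             # 64 MiB RAM-backed bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host, -s: serial
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0         # expose Malloc0 as a namespace of cnode1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # listen on the netns address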
00:18:21.161 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:18:21.161 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:21.161 "params": { 00:18:21.161 "name": "Nvme1", 00:18:21.161 "trtype": "tcp", 00:18:21.161 "traddr": "10.0.0.2", 00:18:21.161 "adrfam": "ipv4", 00:18:21.161 "trsvcid": "4420", 00:18:21.161 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.161 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:21.161 "hdgst": false, 00:18:21.161 "ddgst": false 00:18:21.161 }, 00:18:21.161 "method": "bdev_nvme_attach_controller" 00:18:21.161 }' 00:18:21.161 [2024-07-25 09:00:28.255692] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:18:21.161 [2024-07-25 09:00:28.255884] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid75052 ] 00:18:21.422 [2024-07-25 09:00:28.460676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:21.679 [2024-07-25 09:00:28.766186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:21.679 [2024-07-25 09:00:28.766279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:21.679 [2024-07-25 09:00:28.766280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.937 [2024-07-25 09:00:28.944364] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:22.195 I/O targets: 00:18:22.195 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:22.195 00:18:22.195 00:18:22.195 CUnit - A unit testing framework for C - Version 2.1-3 00:18:22.195 http://cunit.sourceforge.net/ 00:18:22.195 00:18:22.195 00:18:22.195 Suite: bdevio tests on: Nvme1n1 00:18:22.195 Test: blockdev write read block ...passed 00:18:22.195 Test: blockdev write zeroes read block ...passed 00:18:22.195 Test: blockdev write zeroes read no split ...passed 00:18:22.195 Test: blockdev write zeroes read split ...passed 00:18:22.195 Test: blockdev write zeroes read split partial ...passed 00:18:22.195 Test: blockdev reset ...[2024-07-25 09:00:29.259155] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:22.195 [2024-07-25 09:00:29.259401] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000029c00 (9): Bad file descriptor 00:18:22.195 [2024-07-25 09:00:29.271839] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
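The JSON printed above by gen_nvmf_target_json is the single bdev_nvme_attach_controller entry that bdevio reads from /dev/fd/62 via process substitution. Written out as a standalone file in the usual SPDK JSON-config envelope (the "subsystems"/"bdev" wrapper here is a hedged reconstruction; the entry itself is verbatim from the trace), the equivalent invocation would look like:

cat > /tmp/bdevio_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json --no-huge -s 1024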
00:18:22.195 passed 00:18:22.195 Test: blockdev write read 8 blocks ...passed 00:18:22.195 Test: blockdev write read size > 128k ...passed 00:18:22.195 Test: blockdev write read invalid size ...passed 00:18:22.195 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:22.195 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:22.195 Test: blockdev write read max offset ...passed 00:18:22.195 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:22.195 Test: blockdev writev readv 8 blocks ...passed 00:18:22.195 Test: blockdev writev readv 30 x 1block ...passed 00:18:22.195 Test: blockdev writev readv block ...passed 00:18:22.195 Test: blockdev writev readv size > 128k ...passed 00:18:22.195 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:22.195 Test: blockdev comparev and writev ...[2024-07-25 09:00:29.284585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:22.195 [2024-07-25 09:00:29.284654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.195 [2024-07-25 09:00:29.284690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:22.195 [2024-07-25 09:00:29.284712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:22.195 [2024-07-25 09:00:29.285245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:22.195 [2024-07-25 09:00:29.285292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:22.195 [2024-07-25 09:00:29.285322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:22.195 [2024-07-25 09:00:29.285343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:22.195 [2024-07-25 09:00:29.285961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:22.195 [2024-07-25 09:00:29.286007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:22.195 [2024-07-25 09:00:29.286036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:22.195 [2024-07-25 09:00:29.286059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:22.195 [2024-07-25 09:00:29.286586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:22.195 [2024-07-25 09:00:29.286638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:22.195 [2024-07-25 09:00:29.286667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:22.195 [2024-07-25 09:00:29.286687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:22.195 passed 00:18:22.196 Test: blockdev nvme passthru rw ...passed 00:18:22.196 Test: blockdev nvme passthru vendor specific ...[2024-07-25 09:00:29.287722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:22.196 [2024-07-25 09:00:29.287774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:22.196 [2024-07-25 09:00:29.287947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:22.196 [2024-07-25 09:00:29.287978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:22.196 [2024-07-25 09:00:29.288141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:22.196 [2024-07-25 09:00:29.288183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:22.196 [2024-07-25 09:00:29.288362] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:22.196 [2024-07-25 09:00:29.288398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:22.196 passed 00:18:22.196 Test: blockdev nvme admin passthru ...passed 00:18:22.196 Test: blockdev copy ...passed 00:18:22.196 00:18:22.196 Run Summary: Type Total Ran Passed Failed Inactive 00:18:22.196 suites 1 1 n/a 0 0 00:18:22.196 tests 23 23 23 0 0 00:18:22.196 asserts 152 152 152 0 n/a 00:18:22.196 00:18:22.196 Elapsed time = 0.226 seconds 00:18:23.128 09:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:23.128 09:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.128 09:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:23.128 09:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.128 09:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:23.128 09:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:23.128 09:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:23.128 09:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:18:23.128 09:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:23.128 09:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:18:23.128 09:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:23.128 09:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:23.128 rmmod nvme_tcp 00:18:23.128 rmmod nvme_fabrics 00:18:23.128 rmmod nvme_keyring 00:18:23.128 09:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:23.128 09:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:18:23.128 09:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:18:23.128 09:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 75016 ']' 00:18:23.128 09:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 75016 00:18:23.128 09:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 75016 ']' 00:18:23.128 09:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 75016 00:18:23.128 09:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:18:23.128 09:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:23.128 09:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75016 00:18:23.128 09:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:18:23.128 09:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:18:23.128 killing process with pid 75016 00:18:23.128 09:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75016' 00:18:23.128 09:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 75016 00:18:23.128 09:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 75016 00:18:24.063 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:24.063 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:24.063 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:24.063 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:24.063 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:24.063 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.063 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:24.063 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.063 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:24.063 00:18:24.063 real 0m4.610s 00:18:24.063 user 0m16.411s 00:18:24.063 sys 0m1.526s 00:18:24.063 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:24.063 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:24.063 ************************************ 00:18:24.063 END TEST nvmf_bdevio_no_huge 00:18:24.063 ************************************ 00:18:24.063 09:00:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:24.063 09:00:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
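The cleanup traced above (nvmf_delete_subsystem, nvmftestfini with its rmmod output, killprocess 75016, and remove_spdk_ns) reduces to the sequence below; the explicit ip netns delete is an assumption standing in for the _remove_spdk_ns helper, whose body is not shown in this trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the test subsystem first
sync
modprobe -v -r nvme-tcp                                 # unloads nvme_tcp (and nvme_keyring, per the rmmod lines)
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"                      # killprocess: stop nvmf_tgt and reap it
ip netns delete nvmf_tgt_ns_spdk                        # assumed equivalent of _remove_spdk_ns
ip -4 addr flush nvmf_init_if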
00:18:24.063 09:00:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:24.063 09:00:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:24.063 ************************************ 00:18:24.063 START TEST nvmf_tls 00:18:24.063 ************************************ 00:18:24.063 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:24.378 * Looking for test storage... 00:18:24.378 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:24.378 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:24.378 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:24.378 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:24.378 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:24.378 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:24.378 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:24.378 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:24.378 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:24.378 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:24.378 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:24.378 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:24.378 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:24.378 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:18:24.378 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:18:24.378 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:24.378 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:24.378 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:24.378 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:24.378 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:24.378 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:24.378 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:24.378 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:24.378 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.378 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.378 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.378 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:24.378 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.378 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:18:24.378 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:24.378 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:24.378 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:24.378 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:24.378 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:24.378 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:24.378 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
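As in the bdevio run, tls.sh begins by sourcing nvmf/common.sh, which derives the host identity used by every later connect from nvme-cli. The relevant assignments reduce to the lines below (the parameter expansion for NVME_HOSTID is an assumption; the trace only shows that it equals the UUID portion of the generated NQN):

NVME_HOSTNQN=$(nvme gen-hostnqn)        # in this run: nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5
NVME_HOSTID=${NVME_HOSTNQN##*:}         # assumed: strip through the last ':' to keep the UUID
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")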
00:18:24.378 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:24.378 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:24.378 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:18:24.378 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:24.378 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:24.379 Cannot find device 
"nvmf_tgt_br" 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # true 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:24.379 Cannot find device "nvmf_tgt_br2" 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # true 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:24.379 Cannot find device "nvmf_tgt_br" 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # true 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:24.379 Cannot find device "nvmf_tgt_br2" 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # true 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:24.379 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # true 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:24.379 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 
00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:24.379 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:24.637 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:24.637 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:24.637 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:24.637 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:24.637 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:24.637 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:24.637 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:24.637 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:24.637 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:24.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:24.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:18:24.637 00:18:24.637 --- 10.0.0.2 ping statistics --- 00:18:24.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.637 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:18:24.637 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:24.637 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:24.637 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:18:24.637 00:18:24.637 --- 10.0.0.3 ping statistics --- 00:18:24.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.637 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:18:24.637 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:24.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:24.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:18:24.637 00:18:24.637 --- 10.0.0.1 ping statistics --- 00:18:24.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.637 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:18:24.637 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:24.637 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:18:24.637 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:24.637 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:24.637 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:24.637 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:24.637 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:24.637 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:24.637 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:24.637 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:24.637 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:24.637 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:24.637 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.637 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=75276 00:18:24.637 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:24.637 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 75276 00:18:24.637 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 75276 ']' 00:18:24.637 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.637 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:24.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:24.637 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:24.638 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:24.638 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.638 [2024-07-25 09:00:31.721529] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
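The "Cannot find device" and "Cannot open network namespace" messages above are the best-effort teardown of a previous run (each failing command is followed by a tolerated "true"), so they are expected on a clean host. What nvmf_veth_init then builds is a small isolated topology: one initiator-side veth on the host and two target-side veths moved into the nvmf_tgt_ns_spdk namespace, bridged through nvmf_br, with iptables ACCEPT rules for port 4420 and bridge forwarding; the pings above confirm that 10.0.0.1 (initiator) and 10.0.0.2 / 10.0.0.3 (target namespace) reach each other. A condensed sketch of the commands traced above:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator, host side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target, first listen IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # target, second IP
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br && ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# plus "ip link set ... up" for each interface on both sides of the namespace

The "Starting SPDK" banner that follows is the target itself: nvmfappstart runs nvmf_tgt inside the namespace with --wait-for-rpc so that the ssl socket implementation can be selected and tuned before the framework initializes. The rpc shell variable below is only shorthand for the rpc.py path used throughout this trace; the sequence itself is the one exercised in the lines that follow:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
# wait until the app listens on /var/tmp/spdk.sock, then configure sockets before init:
$rpc sock_set_default_impl -i ssl
$rpc sock_impl_set_options -i ssl --tls-version 13
$rpc framework_start_init        # leave the --wait-for-rpc pause and start the subsystems

The sock_impl_get_options round trips further below (setting --tls-version 7, toggling ktls on and off and reading the values back) are only sanity checks of the RPCs, not part of the final configuration; the test ends up with TLS 1.3 and ktls disabled before framework_start_init runs.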
00:18:24.638 [2024-07-25 09:00:31.721734] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:24.895 [2024-07-25 09:00:31.901663] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.154 [2024-07-25 09:00:32.189887] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:25.154 [2024-07-25 09:00:32.189997] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:25.154 [2024-07-25 09:00:32.190015] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:25.154 [2024-07-25 09:00:32.190030] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:25.154 [2024-07-25 09:00:32.190042] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:25.154 [2024-07-25 09:00:32.190105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:25.720 09:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:25.720 09:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:25.720 09:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:25.720 09:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:25.720 09:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:25.721 09:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:25.721 09:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:18:25.721 09:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:25.979 true 00:18:25.979 09:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:18:25.979 09:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:26.237 09:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:18:26.237 09:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:18:26.237 09:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:26.494 09:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:26.495 09:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:18:26.751 09:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:18:26.751 09:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:18:26.751 09:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:27.010 09:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:18:27.010 09:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:18:27.269 09:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:18:27.269 09:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:18:27.269 09:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:27.269 09:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:18:27.528 09:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:18:27.528 09:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:18:27.528 09:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:27.788 09:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:27.788 09:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:18:28.047 09:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:18:28.047 09:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:18:28.047 09:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:28.306 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:28.306 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:18:28.306 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:18:28.306 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:18:28.306 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:28.306 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:28.306 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:28.306 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:28.306 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:18:28.306 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:28.564 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:28.564 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:28.564 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:28.564 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:28.564 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:28.564 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:28.564 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:18:28.564 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:28.564 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:28.564 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:28.564 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:18:28.564 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.xAdzEVdPhK 00:18:28.564 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:28.564 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.fK4eOd9TUo 00:18:28.564 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:28.564 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:28.564 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.xAdzEVdPhK 00:18:28.564 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.fK4eOd9TUo 00:18:28.564 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:28.822 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:18:29.396 [2024-07-25 09:00:36.279761] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:29.396 09:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.xAdzEVdPhK 00:18:29.396 09:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.xAdzEVdPhK 00:18:29.396 09:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:29.654 [2024-07-25 09:00:36.673357] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:29.654 09:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:29.912 09:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:30.170 [2024-07-25 09:00:37.141532] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:30.170 [2024-07-25 09:00:37.141834] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:30.170 09:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:30.461 malloc0 00:18:30.461 09:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:30.719 09:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xAdzEVdPhK 00:18:30.976 [2024-07-25 09:00:37.931496] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:30.976 09:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.xAdzEVdPhK 00:18:43.179 Initializing NVMe Controllers 00:18:43.179 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:43.179 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:43.179 Initialization complete. Launching workers. 00:18:43.179 ======================================================== 00:18:43.179 Latency(us) 00:18:43.179 Device Information : IOPS MiB/s Average min max 00:18:43.179 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6481.86 25.32 9877.47 1832.53 12352.89 00:18:43.179 ======================================================== 00:18:43.179 Total : 6481.86 25.32 9877.47 1832.53 12352.89 00:18:43.179 00:18:43.179 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xAdzEVdPhK 00:18:43.179 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:43.179 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:43.179 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:43.179 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.xAdzEVdPhK' 00:18:43.179 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:43.179 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=75515 00:18:43.180 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:43.180 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:43.180 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 75515 /var/tmp/bdevperf.sock 00:18:43.180 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 75515 ']' 00:18:43.180 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:43.180 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:43.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:43.180 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
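At this point the target side is fully prepared for TLS: setup_nvmf_tgt created the TCP transport, the cnode1 subsystem with a malloc namespace, a TLS-enabled listener on 10.0.0.2:4420 (the -k flag is what triggers the "TLS support is considered experimental" notice above), and registered host1 with the first key file, /tmp/tmp.xAdzEVdPhK (chmod 0600). The spdk_nvme_perf run above (-S ssl, --psk-path) then pushed about 6.5k IOPS of 4 KiB randrw through that secured connection, confirming the data path end to end. Condensed from the trace, with $rpc standing for the rpc.py path as before, the target-side sequence is:

$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xAdzEVdPhK

Note the nvmf_tcp_psk_path deprecation warning above: in this SPDK revision the PSK is still handed over as a file path, a mechanism flagged for removal in v24.09.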
00:18:43.180 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:43.180 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:43.180 [2024-07-25 09:00:48.429310] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:18:43.180 [2024-07-25 09:00:48.429584] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75515 ] 00:18:43.180 [2024-07-25 09:00:48.609784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.180 [2024-07-25 09:00:48.844922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:43.180 [2024-07-25 09:00:49.047623] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:43.180 09:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:43.180 09:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:43.180 09:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xAdzEVdPhK 00:18:43.180 [2024-07-25 09:00:49.458308] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:43.180 [2024-07-25 09:00:49.458501] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:43.180 TLSTESTn1 00:18:43.180 09:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:43.180 Running I/O for 10 seconds... 
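The bdevperf run now in flight is the initiator-side counterpart of the perf test above, driven by run_bdevperf: start bdevperf idle (-z) on its own RPC socket, attach a TLS controller to the target using host1's key, then kick the workload via bdevperf.py. Condensed from the trace (same $rpc shorthand):

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xAdzEVdPhK
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

With the matching key the attach succeeds (the TLSTESTn1 bdev above) and the verify workload completes in the results that follow; the negative cases later in the trace reuse exactly this flow but deliberately break the attach step.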
00:18:53.152 00:18:53.152 Latency(us) 00:18:53.152 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.152 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:53.152 Verification LBA range: start 0x0 length 0x2000 00:18:53.152 TLSTESTn1 : 10.04 2800.11 10.94 0.00 0.00 45590.65 9055.88 27763.43 00:18:53.152 =================================================================================================================== 00:18:53.152 Total : 2800.11 10.94 0.00 0.00 45590.65 9055.88 27763.43 00:18:53.152 0 00:18:53.152 09:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:53.152 09:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 75515 00:18:53.152 09:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 75515 ']' 00:18:53.152 09:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 75515 00:18:53.152 09:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:53.152 09:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:53.152 09:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75515 00:18:53.152 killing process with pid 75515 00:18:53.152 Received shutdown signal, test time was about 10.000000 seconds 00:18:53.152 00:18:53.152 Latency(us) 00:18:53.152 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.152 =================================================================================================================== 00:18:53.152 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:53.152 09:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:53.152 09:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:53.152 09:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75515' 00:18:53.152 09:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 75515 00:18:53.152 [2024-07-25 09:00:59.763301] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:53.152 09:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 75515 00:18:54.084 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fK4eOd9TUo 00:18:54.084 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:54.084 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fK4eOd9TUo 00:18:54.084 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:54.084 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:54.084 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:54.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
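The runs from here on are expected to fail. Two interchange-format keys were generated earlier by format_interchange_psk: /tmp/tmp.xAdzEVdPhK holds the key derived from 00112233445566778899aabbccddeeff, the only one registered on the target for host1, while /tmp/tmp.fK4eOd9TUo holds the key derived from ffeeddccbbaa99887766554433221100, which the target has never seen. The case starting above hands bdevperf the second file, so the TLS handshake cannot complete. The python body behind format_interchange_psk is not shown in the trace; judging from the NVMeTLSkey-1:01:<base64>: strings it prints, it plausibly base64-encodes the ASCII key with a little-endian CRC-32 appended (the NVMe/TCP PSK interchange layout). A hedged sketch, not the script's literal code:

format_interchange_psk() {
python3 - "$1" "$2" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()                              # the configured PSK, as ASCII
crc = zlib.crc32(key).to_bytes(4, byteorder="little")   # checksum appended per the interchange format
b64 = base64.b64encode(key + crc).decode()
print("NVMeTLSkey-1:{:02x}:{}:".format(int(sys.argv[2]), b64), end="")
PY
}
# format_interchange_psk 00112233445566778899aabbccddeeff 1   # should reproduce the first key printed above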
00:18:54.084 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:54.084 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fK4eOd9TUo 00:18:54.084 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:54.084 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:54.084 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:54.084 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.fK4eOd9TUo' 00:18:54.084 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:54.084 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=75654 00:18:54.084 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:54.084 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:54.084 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 75654 /var/tmp/bdevperf.sock 00:18:54.084 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 75654 ']' 00:18:54.084 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:54.084 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:54.084 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:54.084 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:54.084 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.084 [2024-07-25 09:01:01.062887] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:18:54.084 [2024-07-25 09:01:01.063053] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75654 ] 00:18:54.370 [2024-07-25 09:01:01.227455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.370 [2024-07-25 09:01:01.467018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:54.634 [2024-07-25 09:01:01.668412] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:55.198 09:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:55.198 09:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:55.198 09:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.fK4eOd9TUo 00:18:55.198 [2024-07-25 09:01:02.276255] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:55.198 [2024-07-25 09:01:02.276437] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:55.198 [2024-07-25 09:01:02.290879] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:55.198 [2024-07-25 09:01:02.291444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:18:55.198 [2024-07-25 09:01:02.292413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:18:55.198 [2024-07-25 09:01:02.293402] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:55.198 [2024-07-25 09:01:02.293446] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:55.198 [2024-07-25 09:01:02.293468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:55.198 request: 00:18:55.198 { 00:18:55.198 "name": "TLSTEST", 00:18:55.198 "trtype": "tcp", 00:18:55.198 "traddr": "10.0.0.2", 00:18:55.198 "adrfam": "ipv4", 00:18:55.198 "trsvcid": "4420", 00:18:55.198 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:55.198 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:55.198 "prchk_reftag": false, 00:18:55.198 "prchk_guard": false, 00:18:55.198 "hdgst": false, 00:18:55.198 "ddgst": false, 00:18:55.198 "psk": "/tmp/tmp.fK4eOd9TUo", 00:18:55.198 "method": "bdev_nvme_attach_controller", 00:18:55.198 "req_id": 1 00:18:55.198 } 00:18:55.198 Got JSON-RPC error response 00:18:55.198 response: 00:18:55.198 { 00:18:55.198 "code": -5, 00:18:55.198 "message": "Input/output error" 00:18:55.198 } 00:18:55.454 09:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 75654 00:18:55.454 09:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 75654 ']' 00:18:55.454 09:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 75654 00:18:55.454 09:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:55.454 09:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:55.454 09:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75654 00:18:55.454 killing process with pid 75654 00:18:55.454 Received shutdown signal, test time was about 10.000000 seconds 00:18:55.454 00:18:55.454 Latency(us) 00:18:55.454 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.454 =================================================================================================================== 00:18:55.454 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:55.454 09:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:55.454 09:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:55.454 09:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75654' 00:18:55.454 09:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 75654 00:18:55.454 [2024-07-25 09:01:02.344491] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:55.454 09:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 75654 00:18:56.391 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:56.391 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:56.391 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:56.391 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:56.391 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:56.391 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.xAdzEVdPhK 00:18:56.391 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:56.391 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.xAdzEVdPhK 00:18:56.391 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:56.391 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:56.391 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:56.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:56.391 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:56.391 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.xAdzEVdPhK 00:18:56.391 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:56.391 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:56.391 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:56.391 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.xAdzEVdPhK' 00:18:56.391 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:56.391 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=75688 00:18:56.391 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:56.391 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 75688 /var/tmp/bdevperf.sock 00:18:56.391 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:56.391 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 75688 ']' 00:18:56.391 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:56.391 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:56.391 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:56.391 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:56.391 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.652 [2024-07-25 09:01:03.538365] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
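This second negative case keeps the correct key file but connects as nqn.2016-06.io.spdk:host2, a host NQN that was never added to the subsystem. When debugging a case like this by hand, the running target can be asked what it will actually admit; nvmf_get_subsystems lists each subsystem together with its listeners and allowed hosts (exact output fields vary by SPDK version):

ip netns exec nvmf_tgt_ns_spdk $rpc nvmf_get_subsystems
# here it would list nqn.2016-06.io.spdk:cnode1 with its listener on 10.0.0.2:4420
# and host1 as the only allowed host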
00:18:56.652 [2024-07-25 09:01:03.538580] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75688 ] 00:18:56.652 [2024-07-25 09:01:03.713792] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.909 [2024-07-25 09:01:03.953558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:57.167 [2024-07-25 09:01:04.157874] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:57.425 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:57.425 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:57.425 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.xAdzEVdPhK 00:18:57.683 [2024-07-25 09:01:04.677424] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:57.683 [2024-07-25 09:01:04.677650] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:57.683 [2024-07-25 09:01:04.690899] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:57.683 [2024-07-25 09:01:04.690973] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:57.683 [2024-07-25 09:01:04.691065] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:57.683 [2024-07-25 09:01:04.691605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:18:57.683 [2024-07-25 09:01:04.692570] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:18:57.683 [2024-07-25 09:01:04.693570] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:57.683 [2024-07-25 09:01:04.693628] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:57.683 [2024-07-25 09:01:04.693667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:57.683 request: 00:18:57.683 { 00:18:57.683 "name": "TLSTEST", 00:18:57.683 "trtype": "tcp", 00:18:57.683 "traddr": "10.0.0.2", 00:18:57.683 "adrfam": "ipv4", 00:18:57.683 "trsvcid": "4420", 00:18:57.683 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:57.683 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:57.683 "prchk_reftag": false, 00:18:57.683 "prchk_guard": false, 00:18:57.683 "hdgst": false, 00:18:57.683 "ddgst": false, 00:18:57.683 "psk": "/tmp/tmp.xAdzEVdPhK", 00:18:57.683 "method": "bdev_nvme_attach_controller", 00:18:57.683 "req_id": 1 00:18:57.683 } 00:18:57.684 Got JSON-RPC error response 00:18:57.684 response: 00:18:57.684 { 00:18:57.684 "code": -5, 00:18:57.684 "message": "Input/output error" 00:18:57.684 } 00:18:57.684 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 75688 00:18:57.684 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 75688 ']' 00:18:57.684 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 75688 00:18:57.684 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:57.684 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:57.684 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75688 00:18:57.684 killing process with pid 75688 00:18:57.684 Received shutdown signal, test time was about 10.000000 seconds 00:18:57.684 00:18:57.684 Latency(us) 00:18:57.684 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.684 =================================================================================================================== 00:18:57.684 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:57.684 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:57.684 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:57.684 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75688' 00:18:57.684 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 75688 00:18:57.684 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 75688 00:18:57.684 [2024-07-25 09:01:04.741475] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:59.057 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:59.057 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:59.058 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:59.058 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:59.058 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:59.058 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.xAdzEVdPhK 00:18:59.058 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:59.058 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.xAdzEVdPhK 00:18:59.058 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:59.058 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:59.058 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:59.058 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:59.058 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.xAdzEVdPhK 00:18:59.058 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:59.058 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:59.058 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:59.058 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.xAdzEVdPhK' 00:18:59.058 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:59.058 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=75729 00:18:59.058 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:59.058 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:59.058 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 75729 /var/tmp/bdevperf.sock 00:18:59.058 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 75729 ']' 00:18:59.058 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:59.058 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:59.058 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:59.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:59.058 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:59.058 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:59.058 [2024-07-25 09:01:06.052228] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
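The failure above shows how the target resolves keys: the "Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1" error comes from the server-side PSK lookup callback, and the identity it searches for is built from the host NQN and subsystem NQN of the incoming connection. Since host2 was never registered with nvmf_subsystem_add_host, there is no key to offer and the handshake is rejected by the target even though the initiator presented valid key material. Outside of a deliberate negative test, the fix would simply be another registration, for example (hypothetical here, the test intentionally leaves host2 out):

$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.xAdzEVdPhK

The case now starting inverts the mismatch: host1, which is registered on cnode1, tries to attach to nqn.2016-06.io.spdk:cnode2, for which no subsystem or PSK was ever configured, and fails the same way.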
00:18:59.058 [2024-07-25 09:01:06.052431] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75729 ] 00:18:59.342 [2024-07-25 09:01:06.229457] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.600 [2024-07-25 09:01:06.475946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:59.600 [2024-07-25 09:01:06.683662] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:59.858 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:59.858 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:59.858 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xAdzEVdPhK 00:19:00.126 [2024-07-25 09:01:07.185321] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:00.126 [2024-07-25 09:01:07.185511] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:00.126 [2024-07-25 09:01:07.195176] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:00.126 [2024-07-25 09:01:07.195247] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:00.126 [2024-07-25 09:01:07.195319] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:00.126 [2024-07-25 09:01:07.195374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:19:00.126 [2024-07-25 09:01:07.196326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:19:00.126 [2024-07-25 09:01:07.197325] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:00.126 [2024-07-25 09:01:07.197366] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:00.126 [2024-07-25 09:01:07.197395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:19:00.126 request: 00:19:00.126 { 00:19:00.126 "name": "TLSTEST", 00:19:00.126 "trtype": "tcp", 00:19:00.126 "traddr": "10.0.0.2", 00:19:00.126 "adrfam": "ipv4", 00:19:00.126 "trsvcid": "4420", 00:19:00.126 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:00.126 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:00.126 "prchk_reftag": false, 00:19:00.126 "prchk_guard": false, 00:19:00.126 "hdgst": false, 00:19:00.126 "ddgst": false, 00:19:00.126 "psk": "/tmp/tmp.xAdzEVdPhK", 00:19:00.126 "method": "bdev_nvme_attach_controller", 00:19:00.126 "req_id": 1 00:19:00.126 } 00:19:00.126 Got JSON-RPC error response 00:19:00.126 response: 00:19:00.126 { 00:19:00.126 "code": -5, 00:19:00.126 "message": "Input/output error" 00:19:00.126 } 00:19:00.126 09:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 75729 00:19:00.126 09:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 75729 ']' 00:19:00.126 09:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 75729 00:19:00.126 09:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:00.126 09:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:00.126 09:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75729 00:19:00.383 09:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:00.383 killing process with pid 75729 00:19:00.383 09:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:00.383 09:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75729' 00:19:00.383 Received shutdown signal, test time was about 10.000000 seconds 00:19:00.383 00:19:00.383 Latency(us) 00:19:00.383 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:00.383 =================================================================================================================== 00:19:00.383 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:00.383 09:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 75729 00:19:00.383 [2024-07-25 09:01:07.246708] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:00.383 09:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 75729 00:19:01.316 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:01.316 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:01.316 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:01.316 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:01.316 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:01.316 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:01.316 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:01.316 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:01.316 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:01.316 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:01.316 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:01.316 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:01.316 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:01.316 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:01.316 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:01.316 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:01.316 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:01.316 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:01.316 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=75763 00:19:01.316 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:01.316 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:01.316 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 75763 /var/tmp/bdevperf.sock 00:19:01.316 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 75763 ']' 00:19:01.316 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:01.316 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:01.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:01.316 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:01.316 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:01.316 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.575 [2024-07-25 09:01:08.461984] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
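The final variant, now starting, drops --psk entirely: bdev_nvme_attach_controller is called with no key at all against a listener that was created with -k, so the connection is torn down during setup, presumably because the listener insists on a TLS handshake (the "Transport endpoint is not connected" / "Failed to initialize SSD" errors below), and the RPC again returns -5. All four negative cases rely on the NOT wrapper from autotest_common.sh, which passes only when the wrapped command fails; the real helper also distinguishes signals and specific exit codes (the "es > 128" checks visible in the trace), but the idea is roughly this sketch:

NOT() {
    local es=0
    "$@" || es=$?
    ((es != 0))     # NOT succeeds exactly when the wrapped command failed
}
# e.g. NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''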
00:19:01.575 [2024-07-25 09:01:08.462151] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75763 ] 00:19:01.575 [2024-07-25 09:01:08.627680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.834 [2024-07-25 09:01:08.864072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:02.092 [2024-07-25 09:01:09.068082] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:02.351 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:02.351 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:02.351 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:02.611 [2024-07-25 09:01:09.622119] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:02.611 [2024-07-25 09:01:09.623411] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:19:02.611 [2024-07-25 09:01:09.624398] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:02.611 [2024-07-25 09:01:09.624448] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:02.611 [2024-07-25 09:01:09.624470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
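The ERROR lines above are the expected result of the target/tls.sh@155 negative case: run_bdevperf is invoked with an empty PSK, so bdev_nvme_attach_controller reaches a listener that expects TLS and the plain TCP connection is dropped during controller initialization. The JSON-RPC response that follows reports this as code -5 (Input/output error), which the NOT wrapper counts as a pass. A standalone reproduction might look like the sketch below; the rpc.py path, addresses and NQNs are copied from the trace and are assumptions about the local setup.

  # Sketch only: attach without --psk against a TLS-enabled listener and
  # require the call to fail. Paths and addresses are taken from the trace above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  if "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1; then
      echo "attach without a PSK unexpectedly succeeded" >&2
      exit 1
  fi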
00:19:02.611 request: 00:19:02.611 { 00:19:02.611 "name": "TLSTEST", 00:19:02.611 "trtype": "tcp", 00:19:02.611 "traddr": "10.0.0.2", 00:19:02.611 "adrfam": "ipv4", 00:19:02.611 "trsvcid": "4420", 00:19:02.611 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:02.611 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:02.611 "prchk_reftag": false, 00:19:02.611 "prchk_guard": false, 00:19:02.611 "hdgst": false, 00:19:02.611 "ddgst": false, 00:19:02.611 "method": "bdev_nvme_attach_controller", 00:19:02.611 "req_id": 1 00:19:02.611 } 00:19:02.611 Got JSON-RPC error response 00:19:02.611 response: 00:19:02.611 { 00:19:02.611 "code": -5, 00:19:02.611 "message": "Input/output error" 00:19:02.611 } 00:19:02.611 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 75763 00:19:02.611 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 75763 ']' 00:19:02.611 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 75763 00:19:02.611 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:02.611 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:02.611 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75763 00:19:02.611 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:02.611 killing process with pid 75763 00:19:02.611 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:02.611 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75763' 00:19:02.611 Received shutdown signal, test time was about 10.000000 seconds 00:19:02.611 00:19:02.611 Latency(us) 00:19:02.611 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.611 =================================================================================================================== 00:19:02.611 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:02.611 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 75763 00:19:02.611 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 75763 00:19:04.003 09:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:04.003 09:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:04.003 09:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:04.003 09:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:04.003 09:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:04.003 09:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 75276 00:19:04.003 09:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 75276 ']' 00:19:04.003 09:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 75276 00:19:04.003 09:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:04.003 09:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:04.003 09:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 
-- # ps --no-headers -o comm= 75276 00:19:04.003 killing process with pid 75276 00:19:04.003 09:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:04.003 09:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:04.003 09:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75276' 00:19:04.003 09:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 75276 00:19:04.003 [2024-07-25 09:01:10.873101] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:04.003 09:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 75276 00:19:05.434 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:05.434 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:05.434 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:05.434 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:05.434 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:05.434 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:19:05.434 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:05.434 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:05.434 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:19:05.434 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.wYjyZPEOU0 00:19:05.434 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:05.434 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.wYjyZPEOU0 00:19:05.434 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:19:05.434 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:05.434 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:05.434 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:05.434 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=75825 00:19:05.434 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 75825 00:19:05.434 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:05.434 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 75825 ']' 00:19:05.434 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:05.434 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:19:05.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:05.434 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:05.434 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:05.434 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:05.434 [2024-07-25 09:01:12.418744] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:19:05.434 [2024-07-25 09:01:12.418924] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:05.694 [2024-07-25 09:01:12.601709] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.952 [2024-07-25 09:01:12.880355] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:05.952 [2024-07-25 09:01:12.880431] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:05.952 [2024-07-25 09:01:12.880450] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:05.952 [2024-07-25 09:01:12.880466] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:05.952 [2024-07-25 09:01:12.880478] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:05.952 [2024-07-25 09:01:12.880529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:06.210 [2024-07-25 09:01:13.087794] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:06.466 09:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:06.466 09:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:06.466 09:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:06.466 09:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:06.466 09:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:06.466 09:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:06.466 09:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.wYjyZPEOU0 00:19:06.466 09:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.wYjyZPEOU0 00:19:06.466 09:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:06.724 [2024-07-25 09:01:13.595327] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:06.724 09:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:06.982 09:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 -k 00:19:07.301 [2024-07-25 09:01:14.160238] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:07.301 [2024-07-25 09:01:14.160662] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:07.301 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:07.575 malloc0 00:19:07.575 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:07.833 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wYjyZPEOU0 00:19:08.092 [2024-07-25 09:01:14.990403] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:08.092 09:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wYjyZPEOU0 00:19:08.092 09:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:08.092 09:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:08.092 09:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:08.092 09:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.wYjyZPEOU0' 00:19:08.092 09:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:08.092 09:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=75880 00:19:08.092 09:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:08.092 09:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:08.092 09:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 75880 /var/tmp/bdevperf.sock 00:19:08.092 09:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 75880 ']' 00:19:08.092 09:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:08.092 09:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:08.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:08.092 09:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:08.092 09:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:08.092 09:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:08.092 [2024-07-25 09:01:15.118746] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
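Before the bdevperf instance above starts issuing I/O, note where /tmp/tmp.wYjyZPEOU0 comes from: target/tls.sh@159-162 built an interchange-format secret with format_interchange_psk, wrote it out with echo -n, and restricted it to mode 0600. As background on the format (not something the trace itself spells out), the NVMeTLSkey-1 prefix and the 02 hash field correspond to the SHA-384 flavour of the TLS PSK interchange format, and the base64 payload is the configured key bytes followed by a CRC-32 checksum. A hedged sketch that recreates the key file from the literal value printed in this trace:

  # Sketch: recreate the PSK file used above from the interchange-format string
  # printed by format_interchange_psk in this trace. In a real run the value
  # would be generated, not hard-coded.
  key_long='NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:'
  key_long_path=$(mktemp)
  echo -n "$key_long" > "$key_long_path"
  chmod 0600 "$key_long_path"   # both initiator and target reject looser modes, as the later cases show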
00:19:08.092 [2024-07-25 09:01:15.118949] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75880 ] 00:19:08.350 [2024-07-25 09:01:15.290575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.609 [2024-07-25 09:01:15.529936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:08.867 [2024-07-25 09:01:15.733685] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:09.126 09:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:09.126 09:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:09.126 09:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wYjyZPEOU0 00:19:09.385 [2024-07-25 09:01:16.258677] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:09.385 [2024-07-25 09:01:16.258893] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:09.385 TLSTESTn1 00:19:09.385 09:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:09.385 Running I/O for 10 seconds... 00:19:21.592 00:19:21.592 Latency(us) 00:19:21.592 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:21.592 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:21.592 Verification LBA range: start 0x0 length 0x2000 00:19:21.592 TLSTESTn1 : 10.04 2562.59 10.01 0.00 0.00 49840.39 8936.73 50998.92 00:19:21.592 =================================================================================================================== 00:19:21.592 Total : 2562.59 10.01 0.00 0.00 49840.39 8936.73 50998.92 00:19:21.592 0 00:19:21.592 09:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:21.592 09:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 75880 00:19:21.592 09:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 75880 ']' 00:19:21.592 09:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 75880 00:19:21.592 09:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:21.592 09:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:21.592 09:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75880 00:19:21.592 killing process with pid 75880 00:19:21.592 Received shutdown signal, test time was about 10.000000 seconds 00:19:21.592 00:19:21.593 Latency(us) 00:19:21.593 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:21.593 =================================================================================================================== 00:19:21.593 Total : 0.00 0.00 0.00 0.00 
0.00 0.00 0.00 00:19:21.593 09:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:21.593 09:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:21.593 09:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75880' 00:19:21.593 09:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 75880 00:19:21.593 09:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 75880 00:19:21.593 [2024-07-25 09:01:26.580919] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:21.593 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.wYjyZPEOU0 00:19:21.593 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wYjyZPEOU0 00:19:21.593 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:21.593 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wYjyZPEOU0 00:19:21.593 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:21.593 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:21.593 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:21.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
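The successful run above follows the usual bdevperf pattern: the application is started with -z so it idles until tests are requested over its private RPC socket, the TLS-enabled controller is attached as TLSTEST (surfacing the namespace as the TLSTESTn1 bdev), and bdevperf.py perform_tests drives the 10-second, queue-depth-128, 4 KiB verify job whose throughput summary appears in the table above. With the key now loosened to 0666, the next case expects the same attach to be refused. A condensed sketch of the client side, with all paths and parameters lifted from the trace:

  # Sketch of the client-side flow used for the passing TLS case above.
  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock

  "$bdevperf" -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &   # idle until perform_tests
  # (the harness waits for $sock to appear before issuing RPCs)

  "$rpc" -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /tmp/tmp.wYjyZPEOU0

  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -t 20 -s "$sock" perform_tests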
00:19:21.593 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:21.593 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wYjyZPEOU0 00:19:21.593 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:21.593 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:21.593 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:21.593 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.wYjyZPEOU0' 00:19:21.593 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:21.593 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=76021 00:19:21.593 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:21.593 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:21.593 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 76021 /var/tmp/bdevperf.sock 00:19:21.593 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 76021 ']' 00:19:21.593 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:21.593 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:21.593 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:21.593 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:21.593 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:21.593 [2024-07-25 09:01:27.833781] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:19:21.593 [2024-07-25 09:01:27.833980] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76021 ] 00:19:21.593 [2024-07-25 09:01:27.999995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.593 [2024-07-25 09:01:28.264526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:21.593 [2024-07-25 09:01:28.466489] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:21.593 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:21.593 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:21.593 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wYjyZPEOU0 00:19:21.855 [2024-07-25 09:01:28.942641] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:21.855 [2024-07-25 09:01:28.942751] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:21.855 [2024-07-25 09:01:28.942769] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.wYjyZPEOU0 00:19:21.855 request: 00:19:21.855 { 00:19:21.855 "name": "TLSTEST", 00:19:21.855 "trtype": "tcp", 00:19:21.855 "traddr": "10.0.0.2", 00:19:21.855 "adrfam": "ipv4", 00:19:21.855 "trsvcid": "4420", 00:19:21.855 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:21.855 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:21.855 "prchk_reftag": false, 00:19:21.855 "prchk_guard": false, 00:19:21.855 "hdgst": false, 00:19:21.855 "ddgst": false, 00:19:21.855 "psk": "/tmp/tmp.wYjyZPEOU0", 00:19:21.855 "method": "bdev_nvme_attach_controller", 00:19:21.855 "req_id": 1 00:19:21.855 } 00:19:21.855 Got JSON-RPC error response 00:19:21.855 response: 00:19:21.855 { 00:19:21.855 "code": -1, 00:19:21.855 "message": "Operation not permitted" 00:19:21.855 } 00:19:22.114 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 76021 00:19:22.114 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 76021 ']' 00:19:22.114 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 76021 00:19:22.114 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:22.114 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:22.114 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76021 00:19:22.114 killing process with pid 76021 00:19:22.114 Received shutdown signal, test time was about 10.000000 seconds 00:19:22.114 00:19:22.114 Latency(us) 00:19:22.114 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:22.114 =================================================================================================================== 00:19:22.114 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:22.114 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 
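This failure is the point of the chmod 0666 at target/tls.sh@170: bdev_nvme refuses to load a PSK file that is readable by group or others ("Incorrect permissions for PSK file"), so the attach is rejected locally with -1 (Operation not permitted), and the NOT wrapper treats that as a pass. The check can be exercised directly; the sketch assumes the key file and bdevperf RPC socket from this trace.

  # Sketch: a world-readable key file must make the attach fail.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  chmod 0666 /tmp/tmp.wYjyZPEOU0
  if "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.wYjyZPEOU0; then
      echo "attach succeeded despite 0666 permissions on the PSK file" >&2
      exit 1
  fi
  chmod 0600 /tmp/tmp.wYjyZPEOU0   # restore before the positive cases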
00:19:22.114 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:22.114 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76021' 00:19:22.114 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 76021 00:19:22.114 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 76021 00:19:23.058 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:23.058 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:23.058 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:23.058 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:23.058 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:23.058 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 75825 00:19:23.058 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 75825 ']' 00:19:23.058 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 75825 00:19:23.058 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:23.058 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:23.058 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75825 00:19:23.316 killing process with pid 75825 00:19:23.316 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:23.316 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:23.316 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75825' 00:19:23.316 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 75825 00:19:23.316 [2024-07-25 09:01:30.188800] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:23.316 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 75825 00:19:24.702 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:19:24.703 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:24.703 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:24.703 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.703 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=76084 00:19:24.703 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:24.703 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 76084 00:19:24.703 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 76084 ']' 00:19:24.703 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.703 09:01:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:24.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:24.703 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.703 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:24.703 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.703 [2024-07-25 09:01:31.645007] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:19:24.703 [2024-07-25 09:01:31.645155] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:24.703 [2024-07-25 09:01:31.806361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.960 [2024-07-25 09:01:32.055930] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:24.960 [2024-07-25 09:01:32.055994] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:24.960 [2024-07-25 09:01:32.056011] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:24.960 [2024-07-25 09:01:32.056027] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:24.960 [2024-07-25 09:01:32.056039] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
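At this point the harness restarts the target from scratch: the previous nvmf_tgt (pid 75825) was killed after the client-side permission case, and nvmfappstart brings up a new instance (pid 76084) inside the nvmf_tgt_ns_spdk network namespace, then blocks until the /var/tmp/spdk.sock RPC socket answers. A rough equivalent is sketched below; the readiness poll via spdk_get_version is only an illustration and not necessarily what the harness's waitforlisten helper does.

  # Sketch: start a fresh target in the test netns and wait for its RPC socket.
  nvmf_tgt=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
  ip netns exec nvmf_tgt_ns_spdk "$nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version \
        > /dev/null 2>&1; do
      sleep 0.5   # illustrative polling interval
  done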
00:19:24.960 [2024-07-25 09:01:32.056088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:25.217 [2024-07-25 09:01:32.263170] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:25.474 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:25.474 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:25.474 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:25.474 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:25.474 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:25.731 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:25.732 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.wYjyZPEOU0 00:19:25.732 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:25.732 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.wYjyZPEOU0 00:19:25.732 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:19:25.732 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:25.732 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:19:25.732 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:25.732 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.wYjyZPEOU0 00:19:25.732 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.wYjyZPEOU0 00:19:25.732 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:25.989 [2024-07-25 09:01:32.874945] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:25.989 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:26.246 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:26.503 [2024-07-25 09:01:33.463133] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:26.503 [2024-07-25 09:01:33.463420] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:26.503 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:26.760 malloc0 00:19:26.760 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:27.017 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wYjyZPEOU0 00:19:27.273 [2024-07-25 09:01:34.218046] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:27.273 [2024-07-25 09:01:34.218111] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:19:27.273 [2024-07-25 09:01:34.218148] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:27.273 request: 00:19:27.273 { 00:19:27.273 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:27.273 "host": "nqn.2016-06.io.spdk:host1", 00:19:27.273 "psk": "/tmp/tmp.wYjyZPEOU0", 00:19:27.273 "method": "nvmf_subsystem_add_host", 00:19:27.273 "req_id": 1 00:19:27.273 } 00:19:27.273 Got JSON-RPC error response 00:19:27.273 response: 00:19:27.273 { 00:19:27.273 "code": -32603, 00:19:27.273 "message": "Internal error" 00:19:27.273 } 00:19:27.273 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:27.273 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:27.273 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:27.273 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:27.273 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 76084 00:19:27.273 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 76084 ']' 00:19:27.273 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 76084 00:19:27.273 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:27.273 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:27.273 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76084 00:19:27.273 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:27.273 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:27.273 killing process with pid 76084 00:19:27.273 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76084' 00:19:27.273 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 76084 00:19:27.273 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 76084 00:19:28.643 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.wYjyZPEOU0 00:19:28.643 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:19:28.643 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:28.643 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:28.643 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.643 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=76159 00:19:28.643 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:28.643 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 76159 00:19:28.643 09:01:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 76159 ']' 00:19:28.643 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.643 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:28.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.643 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.643 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:28.643 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.643 [2024-07-25 09:01:35.710056] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:19:28.643 [2024-07-25 09:01:35.710197] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:28.913 [2024-07-25 09:01:35.882633] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.170 [2024-07-25 09:01:36.122286] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:29.170 [2024-07-25 09:01:36.122362] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:29.170 [2024-07-25 09:01:36.122380] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:29.170 [2024-07-25 09:01:36.122396] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:29.170 [2024-07-25 09:01:36.122408] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
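The restart above follows an expected failure on the target side: with the key file still at 0666, setup_nvmf_tgt got as far as nvmf_subsystem_add_host, which could not retrieve the PSK ("Incorrect permissions for PSK file") and returned -32603 Internal error, satisfying the NOT wrapper for target/tls.sh@177. Only then is the key tightened back to 0600 and a fresh target (pid 76159) started. The target-side rule can be checked in isolation with the same RPC; the sketch assumes the subsystem from the trace already exists and that rpc.py talks to the default /var/tmp/spdk.sock.

  # Sketch: the target-side mirror of the PSK file permission check.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  chmod 0666 /tmp/tmp.wYjyZPEOU0
  if "$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wYjyZPEOU0; then
      echo "nvmf_subsystem_add_host accepted a world-readable PSK file" >&2
      exit 1
  fi
  chmod 0600 /tmp/tmp.wYjyZPEOU0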
00:19:29.170 [2024-07-25 09:01:36.122461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:29.428 [2024-07-25 09:01:36.380371] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:29.685 09:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:29.685 09:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:29.685 09:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:29.685 09:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:29.685 09:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.685 09:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:29.685 09:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.wYjyZPEOU0 00:19:29.685 09:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.wYjyZPEOU0 00:19:29.685 09:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:29.941 [2024-07-25 09:01:36.863842] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:29.941 09:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:30.199 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:30.457 [2024-07-25 09:01:37.424018] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:30.457 [2024-07-25 09:01:37.424337] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:30.457 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:30.717 malloc0 00:19:30.717 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:30.976 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wYjyZPEOU0 00:19:31.235 [2024-07-25 09:01:38.149876] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:31.235 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=76219 00:19:31.235 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:31.235 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:31.235 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 76219 /var/tmp/bdevperf.sock 00:19:31.235 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 76219 ']' 
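With the key back at 0600, setup_nvmf_tgt completes and bdevperf (pid 76219) is launched for the final positive run. For reference, the target-side sequence just replayed is collapsed into one script below; every call matches a traced command, and the tgtconf JSON emitted by save_config further on is essentially this state serialized back out.

  # Sketch: the target-side TLS setup as a single script, mirroring the traced RPCs.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  key=/tmp/tmp.wYjyZPEOU0                      # interchange-format PSK, mode 0600

  "$rpc" nvmf_create_transport -t tcp -o
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -k            # -k: TLS on this listener (experimental per the notice)
  "$rpc" bdev_malloc_create 32 4096 -b malloc0
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  "$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk "$key"   # PSK-path form, deprecated per the warning above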
00:19:31.235 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:31.235 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:31.235 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:31.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:31.235 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:31.235 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.235 [2024-07-25 09:01:38.259170] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:19:31.235 [2024-07-25 09:01:38.259332] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76219 ] 00:19:31.493 [2024-07-25 09:01:38.430832] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.751 [2024-07-25 09:01:38.732362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:32.007 [2024-07-25 09:01:38.939096] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:32.266 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:32.266 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:32.266 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wYjyZPEOU0 00:19:32.524 [2024-07-25 09:01:39.517277] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:32.524 [2024-07-25 09:01:39.517464] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:32.524 TLSTESTn1 00:19:32.524 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:19:33.090 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:19:33.090 "subsystems": [ 00:19:33.090 { 00:19:33.090 "subsystem": "keyring", 00:19:33.090 "config": [] 00:19:33.090 }, 00:19:33.090 { 00:19:33.090 "subsystem": "iobuf", 00:19:33.090 "config": [ 00:19:33.090 { 00:19:33.090 "method": "iobuf_set_options", 00:19:33.090 "params": { 00:19:33.090 "small_pool_count": 8192, 00:19:33.090 "large_pool_count": 1024, 00:19:33.090 "small_bufsize": 8192, 00:19:33.090 "large_bufsize": 135168 00:19:33.090 } 00:19:33.090 } 00:19:33.090 ] 00:19:33.090 }, 00:19:33.090 { 00:19:33.090 "subsystem": "sock", 00:19:33.090 "config": [ 00:19:33.090 { 00:19:33.090 "method": "sock_set_default_impl", 00:19:33.090 "params": { 00:19:33.090 "impl_name": "uring" 00:19:33.090 } 00:19:33.090 }, 00:19:33.090 { 00:19:33.090 "method": "sock_impl_set_options", 00:19:33.090 "params": { 00:19:33.090 "impl_name": "ssl", 00:19:33.090 "recv_buf_size": 4096, 00:19:33.090 
"send_buf_size": 4096, 00:19:33.090 "enable_recv_pipe": true, 00:19:33.090 "enable_quickack": false, 00:19:33.090 "enable_placement_id": 0, 00:19:33.090 "enable_zerocopy_send_server": true, 00:19:33.090 "enable_zerocopy_send_client": false, 00:19:33.090 "zerocopy_threshold": 0, 00:19:33.090 "tls_version": 0, 00:19:33.090 "enable_ktls": false 00:19:33.090 } 00:19:33.090 }, 00:19:33.090 { 00:19:33.090 "method": "sock_impl_set_options", 00:19:33.090 "params": { 00:19:33.090 "impl_name": "posix", 00:19:33.090 "recv_buf_size": 2097152, 00:19:33.090 "send_buf_size": 2097152, 00:19:33.090 "enable_recv_pipe": true, 00:19:33.090 "enable_quickack": false, 00:19:33.090 "enable_placement_id": 0, 00:19:33.090 "enable_zerocopy_send_server": true, 00:19:33.090 "enable_zerocopy_send_client": false, 00:19:33.090 "zerocopy_threshold": 0, 00:19:33.090 "tls_version": 0, 00:19:33.090 "enable_ktls": false 00:19:33.090 } 00:19:33.090 }, 00:19:33.090 { 00:19:33.090 "method": "sock_impl_set_options", 00:19:33.090 "params": { 00:19:33.090 "impl_name": "uring", 00:19:33.090 "recv_buf_size": 2097152, 00:19:33.090 "send_buf_size": 2097152, 00:19:33.090 "enable_recv_pipe": true, 00:19:33.090 "enable_quickack": false, 00:19:33.090 "enable_placement_id": 0, 00:19:33.090 "enable_zerocopy_send_server": false, 00:19:33.090 "enable_zerocopy_send_client": false, 00:19:33.090 "zerocopy_threshold": 0, 00:19:33.090 "tls_version": 0, 00:19:33.090 "enable_ktls": false 00:19:33.090 } 00:19:33.090 } 00:19:33.090 ] 00:19:33.090 }, 00:19:33.090 { 00:19:33.090 "subsystem": "vmd", 00:19:33.090 "config": [] 00:19:33.090 }, 00:19:33.090 { 00:19:33.090 "subsystem": "accel", 00:19:33.090 "config": [ 00:19:33.090 { 00:19:33.090 "method": "accel_set_options", 00:19:33.090 "params": { 00:19:33.090 "small_cache_size": 128, 00:19:33.090 "large_cache_size": 16, 00:19:33.090 "task_count": 2048, 00:19:33.090 "sequence_count": 2048, 00:19:33.090 "buf_count": 2048 00:19:33.090 } 00:19:33.090 } 00:19:33.090 ] 00:19:33.090 }, 00:19:33.090 { 00:19:33.090 "subsystem": "bdev", 00:19:33.090 "config": [ 00:19:33.090 { 00:19:33.090 "method": "bdev_set_options", 00:19:33.090 "params": { 00:19:33.090 "bdev_io_pool_size": 65535, 00:19:33.090 "bdev_io_cache_size": 256, 00:19:33.090 "bdev_auto_examine": true, 00:19:33.090 "iobuf_small_cache_size": 128, 00:19:33.090 "iobuf_large_cache_size": 16 00:19:33.090 } 00:19:33.090 }, 00:19:33.090 { 00:19:33.090 "method": "bdev_raid_set_options", 00:19:33.090 "params": { 00:19:33.090 "process_window_size_kb": 1024, 00:19:33.090 "process_max_bandwidth_mb_sec": 0 00:19:33.090 } 00:19:33.090 }, 00:19:33.090 { 00:19:33.090 "method": "bdev_iscsi_set_options", 00:19:33.090 "params": { 00:19:33.090 "timeout_sec": 30 00:19:33.090 } 00:19:33.090 }, 00:19:33.090 { 00:19:33.090 "method": "bdev_nvme_set_options", 00:19:33.090 "params": { 00:19:33.090 "action_on_timeout": "none", 00:19:33.090 "timeout_us": 0, 00:19:33.090 "timeout_admin_us": 0, 00:19:33.090 "keep_alive_timeout_ms": 10000, 00:19:33.090 "arbitration_burst": 0, 00:19:33.090 "low_priority_weight": 0, 00:19:33.090 "medium_priority_weight": 0, 00:19:33.090 "high_priority_weight": 0, 00:19:33.090 "nvme_adminq_poll_period_us": 10000, 00:19:33.090 "nvme_ioq_poll_period_us": 0, 00:19:33.090 "io_queue_requests": 0, 00:19:33.090 "delay_cmd_submit": true, 00:19:33.090 "transport_retry_count": 4, 00:19:33.090 "bdev_retry_count": 3, 00:19:33.090 "transport_ack_timeout": 0, 00:19:33.090 "ctrlr_loss_timeout_sec": 0, 00:19:33.090 "reconnect_delay_sec": 0, 00:19:33.090 
"fast_io_fail_timeout_sec": 0, 00:19:33.090 "disable_auto_failback": false, 00:19:33.090 "generate_uuids": false, 00:19:33.090 "transport_tos": 0, 00:19:33.090 "nvme_error_stat": false, 00:19:33.090 "rdma_srq_size": 0, 00:19:33.090 "io_path_stat": false, 00:19:33.090 "allow_accel_sequence": false, 00:19:33.090 "rdma_max_cq_size": 0, 00:19:33.090 "rdma_cm_event_timeout_ms": 0, 00:19:33.090 "dhchap_digests": [ 00:19:33.090 "sha256", 00:19:33.090 "sha384", 00:19:33.090 "sha512" 00:19:33.090 ], 00:19:33.090 "dhchap_dhgroups": [ 00:19:33.090 "null", 00:19:33.090 "ffdhe2048", 00:19:33.090 "ffdhe3072", 00:19:33.090 "ffdhe4096", 00:19:33.090 "ffdhe6144", 00:19:33.090 "ffdhe8192" 00:19:33.090 ] 00:19:33.090 } 00:19:33.090 }, 00:19:33.090 { 00:19:33.090 "method": "bdev_nvme_set_hotplug", 00:19:33.090 "params": { 00:19:33.090 "period_us": 100000, 00:19:33.090 "enable": false 00:19:33.090 } 00:19:33.090 }, 00:19:33.090 { 00:19:33.090 "method": "bdev_malloc_create", 00:19:33.090 "params": { 00:19:33.090 "name": "malloc0", 00:19:33.090 "num_blocks": 8192, 00:19:33.090 "block_size": 4096, 00:19:33.090 "physical_block_size": 4096, 00:19:33.090 "uuid": "379564c6-0f15-459f-af7c-7ebe6370d839", 00:19:33.090 "optimal_io_boundary": 0, 00:19:33.090 "md_size": 0, 00:19:33.090 "dif_type": 0, 00:19:33.090 "dif_is_head_of_md": false, 00:19:33.090 "dif_pi_format": 0 00:19:33.090 } 00:19:33.090 }, 00:19:33.090 { 00:19:33.090 "method": "bdev_wait_for_examine" 00:19:33.090 } 00:19:33.090 ] 00:19:33.090 }, 00:19:33.090 { 00:19:33.090 "subsystem": "nbd", 00:19:33.090 "config": [] 00:19:33.090 }, 00:19:33.090 { 00:19:33.091 "subsystem": "scheduler", 00:19:33.091 "config": [ 00:19:33.091 { 00:19:33.091 "method": "framework_set_scheduler", 00:19:33.091 "params": { 00:19:33.091 "name": "static" 00:19:33.091 } 00:19:33.091 } 00:19:33.091 ] 00:19:33.091 }, 00:19:33.091 { 00:19:33.091 "subsystem": "nvmf", 00:19:33.091 "config": [ 00:19:33.091 { 00:19:33.091 "method": "nvmf_set_config", 00:19:33.091 "params": { 00:19:33.091 "discovery_filter": "match_any", 00:19:33.091 "admin_cmd_passthru": { 00:19:33.091 "identify_ctrlr": false 00:19:33.091 } 00:19:33.091 } 00:19:33.091 }, 00:19:33.091 { 00:19:33.091 "method": "nvmf_set_max_subsystems", 00:19:33.091 "params": { 00:19:33.091 "max_subsystems": 1024 00:19:33.091 } 00:19:33.091 }, 00:19:33.091 { 00:19:33.091 "method": "nvmf_set_crdt", 00:19:33.091 "params": { 00:19:33.091 "crdt1": 0, 00:19:33.091 "crdt2": 0, 00:19:33.091 "crdt3": 0 00:19:33.091 } 00:19:33.091 }, 00:19:33.091 { 00:19:33.091 "method": "nvmf_create_transport", 00:19:33.091 "params": { 00:19:33.091 "trtype": "TCP", 00:19:33.091 "max_queue_depth": 128, 00:19:33.091 "max_io_qpairs_per_ctrlr": 127, 00:19:33.091 "in_capsule_data_size": 4096, 00:19:33.091 "max_io_size": 131072, 00:19:33.091 "io_unit_size": 131072, 00:19:33.091 "max_aq_depth": 128, 00:19:33.091 "num_shared_buffers": 511, 00:19:33.091 "buf_cache_size": 4294967295, 00:19:33.091 "dif_insert_or_strip": false, 00:19:33.091 "zcopy": false, 00:19:33.091 "c2h_success": false, 00:19:33.091 "sock_priority": 0, 00:19:33.091 "abort_timeout_sec": 1, 00:19:33.091 "ack_timeout": 0, 00:19:33.091 "data_wr_pool_size": 0 00:19:33.091 } 00:19:33.091 }, 00:19:33.091 { 00:19:33.091 "method": "nvmf_create_subsystem", 00:19:33.091 "params": { 00:19:33.091 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:33.091 "allow_any_host": false, 00:19:33.091 "serial_number": "SPDK00000000000001", 00:19:33.091 "model_number": "SPDK bdev Controller", 00:19:33.091 "max_namespaces": 10, 00:19:33.091 
"min_cntlid": 1, 00:19:33.091 "max_cntlid": 65519, 00:19:33.091 "ana_reporting": false 00:19:33.091 } 00:19:33.091 }, 00:19:33.091 { 00:19:33.091 "method": "nvmf_subsystem_add_host", 00:19:33.091 "params": { 00:19:33.091 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:33.091 "host": "nqn.2016-06.io.spdk:host1", 00:19:33.091 "psk": "/tmp/tmp.wYjyZPEOU0" 00:19:33.091 } 00:19:33.091 }, 00:19:33.091 { 00:19:33.091 "method": "nvmf_subsystem_add_ns", 00:19:33.091 "params": { 00:19:33.091 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:33.091 "namespace": { 00:19:33.091 "nsid": 1, 00:19:33.091 "bdev_name": "malloc0", 00:19:33.091 "nguid": "379564C60F15459FAF7C7EBE6370D839", 00:19:33.091 "uuid": "379564c6-0f15-459f-af7c-7ebe6370d839", 00:19:33.091 "no_auto_visible": false 00:19:33.091 } 00:19:33.091 } 00:19:33.091 }, 00:19:33.091 { 00:19:33.091 "method": "nvmf_subsystem_add_listener", 00:19:33.091 "params": { 00:19:33.091 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:33.091 "listen_address": { 00:19:33.091 "trtype": "TCP", 00:19:33.091 "adrfam": "IPv4", 00:19:33.091 "traddr": "10.0.0.2", 00:19:33.091 "trsvcid": "4420" 00:19:33.091 }, 00:19:33.091 "secure_channel": true 00:19:33.091 } 00:19:33.091 } 00:19:33.091 ] 00:19:33.091 } 00:19:33.091 ] 00:19:33.091 }' 00:19:33.091 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:33.349 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:19:33.349 "subsystems": [ 00:19:33.349 { 00:19:33.349 "subsystem": "keyring", 00:19:33.349 "config": [] 00:19:33.349 }, 00:19:33.349 { 00:19:33.349 "subsystem": "iobuf", 00:19:33.349 "config": [ 00:19:33.349 { 00:19:33.349 "method": "iobuf_set_options", 00:19:33.349 "params": { 00:19:33.349 "small_pool_count": 8192, 00:19:33.349 "large_pool_count": 1024, 00:19:33.349 "small_bufsize": 8192, 00:19:33.349 "large_bufsize": 135168 00:19:33.349 } 00:19:33.349 } 00:19:33.349 ] 00:19:33.349 }, 00:19:33.349 { 00:19:33.349 "subsystem": "sock", 00:19:33.349 "config": [ 00:19:33.349 { 00:19:33.349 "method": "sock_set_default_impl", 00:19:33.349 "params": { 00:19:33.349 "impl_name": "uring" 00:19:33.349 } 00:19:33.349 }, 00:19:33.349 { 00:19:33.349 "method": "sock_impl_set_options", 00:19:33.349 "params": { 00:19:33.349 "impl_name": "ssl", 00:19:33.349 "recv_buf_size": 4096, 00:19:33.349 "send_buf_size": 4096, 00:19:33.349 "enable_recv_pipe": true, 00:19:33.349 "enable_quickack": false, 00:19:33.349 "enable_placement_id": 0, 00:19:33.349 "enable_zerocopy_send_server": true, 00:19:33.349 "enable_zerocopy_send_client": false, 00:19:33.349 "zerocopy_threshold": 0, 00:19:33.349 "tls_version": 0, 00:19:33.349 "enable_ktls": false 00:19:33.349 } 00:19:33.349 }, 00:19:33.349 { 00:19:33.349 "method": "sock_impl_set_options", 00:19:33.349 "params": { 00:19:33.349 "impl_name": "posix", 00:19:33.349 "recv_buf_size": 2097152, 00:19:33.349 "send_buf_size": 2097152, 00:19:33.349 "enable_recv_pipe": true, 00:19:33.349 "enable_quickack": false, 00:19:33.349 "enable_placement_id": 0, 00:19:33.349 "enable_zerocopy_send_server": true, 00:19:33.349 "enable_zerocopy_send_client": false, 00:19:33.349 "zerocopy_threshold": 0, 00:19:33.349 "tls_version": 0, 00:19:33.349 "enable_ktls": false 00:19:33.349 } 00:19:33.349 }, 00:19:33.349 { 00:19:33.349 "method": "sock_impl_set_options", 00:19:33.349 "params": { 00:19:33.349 "impl_name": "uring", 00:19:33.349 "recv_buf_size": 2097152, 00:19:33.349 "send_buf_size": 2097152, 
00:19:33.349 "enable_recv_pipe": true, 00:19:33.349 "enable_quickack": false, 00:19:33.349 "enable_placement_id": 0, 00:19:33.349 "enable_zerocopy_send_server": false, 00:19:33.349 "enable_zerocopy_send_client": false, 00:19:33.349 "zerocopy_threshold": 0, 00:19:33.349 "tls_version": 0, 00:19:33.349 "enable_ktls": false 00:19:33.349 } 00:19:33.349 } 00:19:33.349 ] 00:19:33.349 }, 00:19:33.349 { 00:19:33.349 "subsystem": "vmd", 00:19:33.349 "config": [] 00:19:33.349 }, 00:19:33.349 { 00:19:33.349 "subsystem": "accel", 00:19:33.349 "config": [ 00:19:33.349 { 00:19:33.349 "method": "accel_set_options", 00:19:33.349 "params": { 00:19:33.349 "small_cache_size": 128, 00:19:33.349 "large_cache_size": 16, 00:19:33.349 "task_count": 2048, 00:19:33.349 "sequence_count": 2048, 00:19:33.349 "buf_count": 2048 00:19:33.349 } 00:19:33.349 } 00:19:33.349 ] 00:19:33.349 }, 00:19:33.349 { 00:19:33.349 "subsystem": "bdev", 00:19:33.349 "config": [ 00:19:33.349 { 00:19:33.349 "method": "bdev_set_options", 00:19:33.349 "params": { 00:19:33.349 "bdev_io_pool_size": 65535, 00:19:33.349 "bdev_io_cache_size": 256, 00:19:33.349 "bdev_auto_examine": true, 00:19:33.349 "iobuf_small_cache_size": 128, 00:19:33.349 "iobuf_large_cache_size": 16 00:19:33.349 } 00:19:33.349 }, 00:19:33.349 { 00:19:33.349 "method": "bdev_raid_set_options", 00:19:33.349 "params": { 00:19:33.349 "process_window_size_kb": 1024, 00:19:33.349 "process_max_bandwidth_mb_sec": 0 00:19:33.349 } 00:19:33.349 }, 00:19:33.349 { 00:19:33.349 "method": "bdev_iscsi_set_options", 00:19:33.349 "params": { 00:19:33.349 "timeout_sec": 30 00:19:33.349 } 00:19:33.349 }, 00:19:33.349 { 00:19:33.349 "method": "bdev_nvme_set_options", 00:19:33.350 "params": { 00:19:33.350 "action_on_timeout": "none", 00:19:33.350 "timeout_us": 0, 00:19:33.350 "timeout_admin_us": 0, 00:19:33.350 "keep_alive_timeout_ms": 10000, 00:19:33.350 "arbitration_burst": 0, 00:19:33.350 "low_priority_weight": 0, 00:19:33.350 "medium_priority_weight": 0, 00:19:33.350 "high_priority_weight": 0, 00:19:33.350 "nvme_adminq_poll_period_us": 10000, 00:19:33.350 "nvme_ioq_poll_period_us": 0, 00:19:33.350 "io_queue_requests": 512, 00:19:33.350 "delay_cmd_submit": true, 00:19:33.350 "transport_retry_count": 4, 00:19:33.350 "bdev_retry_count": 3, 00:19:33.350 "transport_ack_timeout": 0, 00:19:33.350 "ctrlr_loss_timeout_sec": 0, 00:19:33.350 "reconnect_delay_sec": 0, 00:19:33.350 "fast_io_fail_timeout_sec": 0, 00:19:33.350 "disable_auto_failback": false, 00:19:33.350 "generate_uuids": false, 00:19:33.350 "transport_tos": 0, 00:19:33.350 "nvme_error_stat": false, 00:19:33.350 "rdma_srq_size": 0, 00:19:33.350 "io_path_stat": false, 00:19:33.350 "allow_accel_sequence": false, 00:19:33.350 "rdma_max_cq_size": 0, 00:19:33.350 "rdma_cm_event_timeout_ms": 0, 00:19:33.350 "dhchap_digests": [ 00:19:33.350 "sha256", 00:19:33.350 "sha384", 00:19:33.350 "sha512" 00:19:33.350 ], 00:19:33.350 "dhchap_dhgroups": [ 00:19:33.350 "null", 00:19:33.350 "ffdhe2048", 00:19:33.350 "ffdhe3072", 00:19:33.350 "ffdhe4096", 00:19:33.350 "ffdhe6144", 00:19:33.350 "ffdhe8192" 00:19:33.350 ] 00:19:33.350 } 00:19:33.350 }, 00:19:33.350 { 00:19:33.350 "method": "bdev_nvme_attach_controller", 00:19:33.350 "params": { 00:19:33.350 "name": "TLSTEST", 00:19:33.350 "trtype": "TCP", 00:19:33.350 "adrfam": "IPv4", 00:19:33.350 "traddr": "10.0.0.2", 00:19:33.350 "trsvcid": "4420", 00:19:33.350 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:33.350 "prchk_reftag": false, 00:19:33.350 "prchk_guard": false, 00:19:33.350 "ctrlr_loss_timeout_sec": 0, 
00:19:33.350 "reconnect_delay_sec": 0, 00:19:33.350 "fast_io_fail_timeout_sec": 0, 00:19:33.350 "psk": "/tmp/tmp.wYjyZPEOU0", 00:19:33.350 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:33.350 "hdgst": false, 00:19:33.350 "ddgst": false 00:19:33.350 } 00:19:33.350 }, 00:19:33.350 { 00:19:33.350 "method": "bdev_nvme_set_hotplug", 00:19:33.350 "params": { 00:19:33.350 "period_us": 100000, 00:19:33.350 "enable": false 00:19:33.350 } 00:19:33.350 }, 00:19:33.350 { 00:19:33.350 "method": "bdev_wait_for_examine" 00:19:33.350 } 00:19:33.350 ] 00:19:33.350 }, 00:19:33.350 { 00:19:33.350 "subsystem": "nbd", 00:19:33.350 "config": [] 00:19:33.350 } 00:19:33.350 ] 00:19:33.350 }' 00:19:33.350 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 76219 00:19:33.350 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 76219 ']' 00:19:33.350 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 76219 00:19:33.350 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:33.350 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:33.350 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76219 00:19:33.350 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:33.350 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:33.350 killing process with pid 76219 00:19:33.350 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76219' 00:19:33.350 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 76219 00:19:33.350 Received shutdown signal, test time was about 10.000000 seconds 00:19:33.350 00:19:33.350 Latency(us) 00:19:33.350 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.350 =================================================================================================================== 00:19:33.350 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:33.350 [2024-07-25 09:01:40.342600] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:33.350 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 76219 00:19:34.731 09:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 76159 00:19:34.731 09:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 76159 ']' 00:19:34.731 09:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 76159 00:19:34.731 09:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:34.731 09:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:34.731 09:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76159 00:19:34.731 killing process with pid 76159 00:19:34.731 09:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:34.731 09:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:34.731 09:01:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76159' 00:19:34.731 09:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 76159 00:19:34.731 [2024-07-25 09:01:41.455064] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:34.731 09:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 76159 00:19:35.666 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:35.666 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:35.666 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:35.666 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:19:35.666 "subsystems": [ 00:19:35.666 { 00:19:35.666 "subsystem": "keyring", 00:19:35.666 "config": [] 00:19:35.666 }, 00:19:35.666 { 00:19:35.666 "subsystem": "iobuf", 00:19:35.666 "config": [ 00:19:35.666 { 00:19:35.666 "method": "iobuf_set_options", 00:19:35.666 "params": { 00:19:35.666 "small_pool_count": 8192, 00:19:35.666 "large_pool_count": 1024, 00:19:35.666 "small_bufsize": 8192, 00:19:35.666 "large_bufsize": 135168 00:19:35.666 } 00:19:35.666 } 00:19:35.666 ] 00:19:35.666 }, 00:19:35.666 { 00:19:35.666 "subsystem": "sock", 00:19:35.666 "config": [ 00:19:35.666 { 00:19:35.666 "method": "sock_set_default_impl", 00:19:35.666 "params": { 00:19:35.666 "impl_name": "uring" 00:19:35.666 } 00:19:35.666 }, 00:19:35.666 { 00:19:35.666 "method": "sock_impl_set_options", 00:19:35.666 "params": { 00:19:35.666 "impl_name": "ssl", 00:19:35.666 "recv_buf_size": 4096, 00:19:35.666 "send_buf_size": 4096, 00:19:35.666 "enable_recv_pipe": true, 00:19:35.666 "enable_quickack": false, 00:19:35.666 "enable_placement_id": 0, 00:19:35.666 "enable_zerocopy_send_server": true, 00:19:35.666 "enable_zerocopy_send_client": false, 00:19:35.666 "zerocopy_threshold": 0, 00:19:35.666 "tls_version": 0, 00:19:35.666 "enable_ktls": false 00:19:35.666 } 00:19:35.666 }, 00:19:35.666 { 00:19:35.666 "method": "sock_impl_set_options", 00:19:35.666 "params": { 00:19:35.666 "impl_name": "posix", 00:19:35.666 "recv_buf_size": 2097152, 00:19:35.666 "send_buf_size": 2097152, 00:19:35.666 "enable_recv_pipe": true, 00:19:35.666 "enable_quickack": false, 00:19:35.666 "enable_placement_id": 0, 00:19:35.666 "enable_zerocopy_send_server": true, 00:19:35.666 "enable_zerocopy_send_client": false, 00:19:35.666 "zerocopy_threshold": 0, 00:19:35.666 "tls_version": 0, 00:19:35.666 "enable_ktls": false 00:19:35.666 } 00:19:35.666 }, 00:19:35.666 { 00:19:35.666 "method": "sock_impl_set_options", 00:19:35.666 "params": { 00:19:35.666 "impl_name": "uring", 00:19:35.666 "recv_buf_size": 2097152, 00:19:35.666 "send_buf_size": 2097152, 00:19:35.666 "enable_recv_pipe": true, 00:19:35.666 "enable_quickack": false, 00:19:35.666 "enable_placement_id": 0, 00:19:35.666 "enable_zerocopy_send_server": false, 00:19:35.666 "enable_zerocopy_send_client": false, 00:19:35.666 "zerocopy_threshold": 0, 00:19:35.666 "tls_version": 0, 00:19:35.666 "enable_ktls": false 00:19:35.666 } 00:19:35.666 } 00:19:35.666 ] 00:19:35.666 }, 00:19:35.666 { 00:19:35.666 "subsystem": "vmd", 00:19:35.666 "config": [] 00:19:35.666 }, 00:19:35.666 { 00:19:35.666 "subsystem": "accel", 00:19:35.666 "config": [ 00:19:35.666 { 00:19:35.666 "method": 
"accel_set_options", 00:19:35.666 "params": { 00:19:35.666 "small_cache_size": 128, 00:19:35.666 "large_cache_size": 16, 00:19:35.666 "task_count": 2048, 00:19:35.666 "sequence_count": 2048, 00:19:35.666 "buf_count": 2048 00:19:35.666 } 00:19:35.666 } 00:19:35.666 ] 00:19:35.666 }, 00:19:35.666 { 00:19:35.666 "subsystem": "bdev", 00:19:35.666 "config": [ 00:19:35.666 { 00:19:35.666 "method": "bdev_set_options", 00:19:35.666 "params": { 00:19:35.666 "bdev_io_pool_size": 65535, 00:19:35.666 "bdev_io_cache_size": 256, 00:19:35.666 "bdev_auto_examine": true, 00:19:35.666 "iobuf_small_cache_size": 128, 00:19:35.666 "iobuf_large_cache_size": 16 00:19:35.666 } 00:19:35.666 }, 00:19:35.666 { 00:19:35.666 "method": "bdev_raid_set_options", 00:19:35.666 "params": { 00:19:35.666 "process_window_size_kb": 1024, 00:19:35.666 "process_max_bandwidth_mb_sec": 0 00:19:35.666 } 00:19:35.666 }, 00:19:35.666 { 00:19:35.666 "method": "bdev_iscsi_set_options", 00:19:35.666 "params": { 00:19:35.666 "timeout_sec": 30 00:19:35.666 } 00:19:35.666 }, 00:19:35.666 { 00:19:35.666 "method": "bdev_nvme_set_options", 00:19:35.666 "params": { 00:19:35.666 "action_on_timeout": "none", 00:19:35.666 "timeout_us": 0, 00:19:35.666 "timeout_admin_us": 0, 00:19:35.666 "keep_alive_timeout_ms": 10000, 00:19:35.666 "arbitration_burst": 0, 00:19:35.666 "low_priority_weight": 0, 00:19:35.666 "medium_priority_weight": 0, 00:19:35.666 "high_priority_weight": 0, 00:19:35.666 "nvme_adminq_poll_period_us": 10000, 00:19:35.666 "nvme_ioq_poll_period_us": 0, 00:19:35.666 "io_queue_requests": 0, 00:19:35.666 "delay_cmd_submit": true, 00:19:35.666 "transport_retry_count": 4, 00:19:35.666 "bdev_retry_count": 3, 00:19:35.666 "transport_ack_timeout": 0, 00:19:35.666 "ctrlr_loss_timeout_sec": 0, 00:19:35.666 "reconnect_delay_sec": 0, 00:19:35.666 "fast_io_fail_timeout_sec": 0, 00:19:35.666 "disable_auto_failback": false, 00:19:35.666 "generate_uuids": false, 00:19:35.666 "transport_tos": 0, 00:19:35.666 "nvme_error_stat": false, 00:19:35.666 "rdma_srq_size": 0, 00:19:35.666 "io_path_stat": false, 00:19:35.666 "allow_accel_sequence": false, 00:19:35.666 "rdma_max_cq_size": 0, 00:19:35.666 "rdma_cm_event_timeout_ms": 0, 00:19:35.666 "dhchap_digests": [ 00:19:35.666 "sha256", 00:19:35.666 "sha384", 00:19:35.666 "sha512" 00:19:35.666 ], 00:19:35.666 "dhchap_dhgroups": [ 00:19:35.666 "null", 00:19:35.666 "ffdhe2048", 00:19:35.666 "ffdhe3072", 00:19:35.666 "ffdhe4096", 00:19:35.666 "ffdhe6144", 00:19:35.666 "ffdhe8192" 00:19:35.666 ] 00:19:35.666 } 00:19:35.666 }, 00:19:35.666 { 00:19:35.666 "method": "bdev_nvme_set_hotplug", 00:19:35.666 "params": { 00:19:35.666 "period_us": 100000, 00:19:35.666 "enable": false 00:19:35.666 } 00:19:35.666 }, 00:19:35.666 { 00:19:35.666 "method": "bdev_malloc_create", 00:19:35.666 "params": { 00:19:35.666 "name": "malloc0", 00:19:35.666 "num_blocks": 8192, 00:19:35.666 "block_size": 4096, 00:19:35.666 "physical_block_size": 4096, 00:19:35.666 "uuid": "379564c6-0f15-459f-af7c-7ebe6370d839", 00:19:35.666 "optimal_io_boundary": 0, 00:19:35.666 "md_size": 0, 00:19:35.666 "dif_type": 0, 00:19:35.666 "dif_is_head_of_md": false, 00:19:35.666 "dif_pi_format": 0 00:19:35.666 } 00:19:35.666 }, 00:19:35.666 { 00:19:35.666 "method": "bdev_wait_for_examine" 00:19:35.666 } 00:19:35.666 ] 00:19:35.666 }, 00:19:35.666 { 00:19:35.666 "subsystem": "nbd", 00:19:35.666 "config": [] 00:19:35.666 }, 00:19:35.666 { 00:19:35.666 "subsystem": "scheduler", 00:19:35.666 "config": [ 00:19:35.666 { 00:19:35.667 "method": "framework_set_scheduler", 
00:19:35.667 "params": { 00:19:35.667 "name": "static" 00:19:35.667 } 00:19:35.667 } 00:19:35.667 ] 00:19:35.667 }, 00:19:35.667 { 00:19:35.667 "subsystem": "nvmf", 00:19:35.667 "config": [ 00:19:35.667 { 00:19:35.667 "method": "nvmf_set_config", 00:19:35.667 "params": { 00:19:35.667 "discovery_filter": "match_any", 00:19:35.667 "admin_cmd_passthru": { 00:19:35.667 "identify_ctrlr": false 00:19:35.667 } 00:19:35.667 } 00:19:35.667 }, 00:19:35.667 { 00:19:35.667 "method": "nvmf_set_max_subsystems", 00:19:35.667 "params": { 00:19:35.667 "max_subsystems": 1024 00:19:35.667 } 00:19:35.667 }, 00:19:35.667 { 00:19:35.667 "method": "nvmf_set_crdt", 00:19:35.667 "params": { 00:19:35.667 "crdt1": 0, 00:19:35.667 "crdt2": 0, 00:19:35.667 "crdt3": 0 00:19:35.667 } 00:19:35.667 }, 00:19:35.667 { 00:19:35.667 "method": "nvmf_create_transport", 00:19:35.667 "params": { 00:19:35.667 "trtype": "TCP", 00:19:35.667 "max_queue_depth": 128, 00:19:35.667 "max_io_qpairs_per_ctrlr": 127, 00:19:35.667 "in_capsule_data_size": 4096, 00:19:35.667 "max_io_size": 131072, 00:19:35.667 "io_unit_size": 131072, 00:19:35.667 "max_aq_depth": 128, 00:19:35.667 "num_shared_buffers": 511, 00:19:35.667 "buf_cache_size": 4294967295, 00:19:35.667 "dif_insert_or_strip": false, 00:19:35.667 "zcopy": false, 00:19:35.667 "c2h_success": false, 00:19:35.667 "sock_priority": 0, 00:19:35.667 "abort_timeout_sec": 1, 00:19:35.667 "ack_timeout": 0, 00:19:35.667 "data_wr_pool_size": 0 00:19:35.667 } 00:19:35.667 }, 00:19:35.667 { 00:19:35.667 "method": "nvmf_create_subsystem", 00:19:35.667 "params": { 00:19:35.667 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.667 "allow_any_host": false, 00:19:35.667 "serial_number": "SPDK00000000000001", 00:19:35.667 "model_number": "SPDK bdev Controller", 00:19:35.667 "max_namespaces": 10, 00:19:35.667 "min_cntlid": 1, 00:19:35.667 "max_cntlid": 65519, 00:19:35.667 "ana_reporting": false 00:19:35.667 } 00:19:35.667 }, 00:19:35.667 { 00:19:35.667 "method": "nvmf_subsystem_add_host", 00:19:35.667 "params": { 00:19:35.667 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.667 "host": "nqn.2016-06.io.spdk:host1", 00:19:35.667 "psk": "/tmp/tmp.wYjyZPEOU0" 00:19:35.667 } 00:19:35.667 }, 00:19:35.667 { 00:19:35.667 "method": "nvmf_subsystem_add_ns", 00:19:35.667 "params": { 00:19:35.667 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.667 "namespace": { 00:19:35.667 "nsid": 1, 00:19:35.667 "bdev_name": "malloc0", 00:19:35.667 "nguid": "379564C60F15459FAF7C7EBE6370D839", 00:19:35.667 "uuid": "379564c6-0f15-459f-af7c-7ebe6370d839", 00:19:35.667 "no_auto_visible": false 00:19:35.667 } 00:19:35.667 } 00:19:35.667 }, 00:19:35.667 { 00:19:35.667 "method": "nvmf_subsystem_add_listener", 00:19:35.667 "params": { 00:19:35.667 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.667 "listen_address": { 00:19:35.667 "trtype": "TCP", 00:19:35.667 "adrfam": "IPv4", 00:19:35.667 "traddr": "10.0.0.2", 00:19:35.667 "trsvcid": "4420" 00:19:35.667 }, 00:19:35.667 "secure_channel": true 00:19:35.667 } 00:19:35.667 } 00:19:35.667 ] 00:19:35.667 } 00:19:35.667 ] 00:19:35.667 }' 00:19:35.667 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:35.667 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=76283 00:19:35.667 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:35.667 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@482 -- # waitforlisten 76283 00:19:35.667 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 76283 ']' 00:19:35.667 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.667 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:35.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:35.667 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:35.667 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:35.667 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:35.926 [2024-07-25 09:01:42.857393] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:19:35.926 [2024-07-25 09:01:42.857549] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:35.926 [2024-07-25 09:01:43.024244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.185 [2024-07-25 09:01:43.262155] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:36.185 [2024-07-25 09:01:43.262223] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:36.185 [2024-07-25 09:01:43.262242] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:36.185 [2024-07-25 09:01:43.262259] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:36.185 [2024-07-25 09:01:43.262271] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
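For readers following the trace: the restart pattern exercised here is to capture the running target's configuration with save_config and feed the JSON back into a fresh nvmf_tgt through -c. The test pipes the JSON over /dev/fd/62; the on-disk path below is only an illustrative stand-in for that file descriptor.

  # Rough sketch of the save_config / replay pattern (file path is an assumption):
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock save_config > /tmp/tgt_config.json
  # ...stop the old target, then start a fresh one from the captured JSON:
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x2 -c /tmp/tgt_config.json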
00:19:36.185 [2024-07-25 09:01:43.262419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:36.753 [2024-07-25 09:01:43.582310] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:36.753 [2024-07-25 09:01:43.756239] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:36.753 [2024-07-25 09:01:43.779182] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:36.753 [2024-07-25 09:01:43.795122] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:36.753 [2024-07-25 09:01:43.795398] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:36.753 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:36.753 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:36.753 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:36.753 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:36.753 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:37.012 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:37.012 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=76315 00:19:37.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:37.012 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 76315 /var/tmp/bdevperf.sock 00:19:37.012 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 76315 ']' 00:19:37.012 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:37.012 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:37.012 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:37.012 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
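The bdevperf invocation traced just above starts the application idle (-z) behind its own RPC socket; the verify workload itself is kicked off afterwards with bdevperf.py perform_tests. A condensed sketch of that pairing follows; the JSON config file path is an assumption, since the test feeds the configuration over /dev/fd/63.

  # Start bdevperf idle on a private RPC socket (flags as traced above):
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z \
      -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /tmp/bdevperf.json &
  # Once it is listening, trigger the 10-second run (-t 20 here is the RPC
  # timeout used later in this trace, not the workload duration):
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -t 20 -s /var/tmp/bdevperf.sock perform_tests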
00:19:37.012 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:37.012 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:37.012 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:19:37.012 "subsystems": [ 00:19:37.012 { 00:19:37.012 "subsystem": "keyring", 00:19:37.012 "config": [] 00:19:37.012 }, 00:19:37.012 { 00:19:37.012 "subsystem": "iobuf", 00:19:37.012 "config": [ 00:19:37.012 { 00:19:37.012 "method": "iobuf_set_options", 00:19:37.012 "params": { 00:19:37.012 "small_pool_count": 8192, 00:19:37.012 "large_pool_count": 1024, 00:19:37.012 "small_bufsize": 8192, 00:19:37.012 "large_bufsize": 135168 00:19:37.012 } 00:19:37.012 } 00:19:37.012 ] 00:19:37.012 }, 00:19:37.012 { 00:19:37.012 "subsystem": "sock", 00:19:37.012 "config": [ 00:19:37.012 { 00:19:37.012 "method": "sock_set_default_impl", 00:19:37.012 "params": { 00:19:37.012 "impl_name": "uring" 00:19:37.012 } 00:19:37.012 }, 00:19:37.012 { 00:19:37.012 "method": "sock_impl_set_options", 00:19:37.012 "params": { 00:19:37.012 "impl_name": "ssl", 00:19:37.012 "recv_buf_size": 4096, 00:19:37.012 "send_buf_size": 4096, 00:19:37.012 "enable_recv_pipe": true, 00:19:37.012 "enable_quickack": false, 00:19:37.012 "enable_placement_id": 0, 00:19:37.012 "enable_zerocopy_send_server": true, 00:19:37.012 "enable_zerocopy_send_client": false, 00:19:37.012 "zerocopy_threshold": 0, 00:19:37.012 "tls_version": 0, 00:19:37.012 "enable_ktls": false 00:19:37.012 } 00:19:37.012 }, 00:19:37.012 { 00:19:37.012 "method": "sock_impl_set_options", 00:19:37.012 "params": { 00:19:37.012 "impl_name": "posix", 00:19:37.012 "recv_buf_size": 2097152, 00:19:37.012 "send_buf_size": 2097152, 00:19:37.012 "enable_recv_pipe": true, 00:19:37.012 "enable_quickack": false, 00:19:37.012 "enable_placement_id": 0, 00:19:37.012 "enable_zerocopy_send_server": true, 00:19:37.012 "enable_zerocopy_send_client": false, 00:19:37.012 "zerocopy_threshold": 0, 00:19:37.012 "tls_version": 0, 00:19:37.012 "enable_ktls": false 00:19:37.012 } 00:19:37.012 }, 00:19:37.012 { 00:19:37.012 "method": "sock_impl_set_options", 00:19:37.012 "params": { 00:19:37.012 "impl_name": "uring", 00:19:37.012 "recv_buf_size": 2097152, 00:19:37.012 "send_buf_size": 2097152, 00:19:37.012 "enable_recv_pipe": true, 00:19:37.012 "enable_quickack": false, 00:19:37.012 "enable_placement_id": 0, 00:19:37.012 "enable_zerocopy_send_server": false, 00:19:37.012 "enable_zerocopy_send_client": false, 00:19:37.012 "zerocopy_threshold": 0, 00:19:37.012 "tls_version": 0, 00:19:37.012 "enable_ktls": false 00:19:37.012 } 00:19:37.012 } 00:19:37.012 ] 00:19:37.012 }, 00:19:37.012 { 00:19:37.012 "subsystem": "vmd", 00:19:37.012 "config": [] 00:19:37.012 }, 00:19:37.012 { 00:19:37.012 "subsystem": "accel", 00:19:37.012 "config": [ 00:19:37.012 { 00:19:37.012 "method": "accel_set_options", 00:19:37.012 "params": { 00:19:37.012 "small_cache_size": 128, 00:19:37.012 "large_cache_size": 16, 00:19:37.012 "task_count": 2048, 00:19:37.012 "sequence_count": 2048, 00:19:37.012 "buf_count": 2048 00:19:37.012 } 00:19:37.012 } 00:19:37.012 ] 00:19:37.012 }, 00:19:37.012 { 00:19:37.012 "subsystem": "bdev", 00:19:37.012 "config": [ 00:19:37.012 { 00:19:37.012 "method": "bdev_set_options", 00:19:37.012 "params": { 00:19:37.012 "bdev_io_pool_size": 65535, 00:19:37.012 "bdev_io_cache_size": 256, 00:19:37.012 "bdev_auto_examine": true, 00:19:37.012 "iobuf_small_cache_size": 128, 00:19:37.012 "iobuf_large_cache_size": 
16 00:19:37.012 } 00:19:37.012 }, 00:19:37.012 { 00:19:37.012 "method": "bdev_raid_set_options", 00:19:37.012 "params": { 00:19:37.012 "process_window_size_kb": 1024, 00:19:37.012 "process_max_bandwidth_mb_sec": 0 00:19:37.012 } 00:19:37.012 }, 00:19:37.012 { 00:19:37.012 "method": "bdev_iscsi_set_options", 00:19:37.012 "params": { 00:19:37.012 "timeout_sec": 30 00:19:37.012 } 00:19:37.012 }, 00:19:37.012 { 00:19:37.012 "method": "bdev_nvme_set_options", 00:19:37.012 "params": { 00:19:37.012 "action_on_timeout": "none", 00:19:37.012 "timeout_us": 0, 00:19:37.012 "timeout_admin_us": 0, 00:19:37.012 "keep_alive_timeout_ms": 10000, 00:19:37.012 "arbitration_burst": 0, 00:19:37.012 "low_priority_weight": 0, 00:19:37.012 "medium_priority_weight": 0, 00:19:37.012 "high_priority_weight": 0, 00:19:37.012 "nvme_adminq_poll_period_us": 10000, 00:19:37.012 "nvme_ioq_poll_period_us": 0, 00:19:37.012 "io_queue_requests": 512, 00:19:37.012 "delay_cmd_submit": true, 00:19:37.012 "transport_retry_count": 4, 00:19:37.012 "bdev_retry_count": 3, 00:19:37.012 "transport_ack_timeout": 0, 00:19:37.012 "ctrlr_loss_timeout_sec": 0, 00:19:37.012 "reconnect_delay_sec": 0, 00:19:37.012 "fast_io_fail_timeout_sec": 0, 00:19:37.013 "disable_auto_failback": false, 00:19:37.013 "generate_uuids": false, 00:19:37.013 "transport_tos": 0, 00:19:37.013 "nvme_error_stat": false, 00:19:37.013 "rdma_srq_size": 0, 00:19:37.013 "io_path_stat": false, 00:19:37.013 "allow_accel_sequence": false, 00:19:37.013 "rdma_max_cq_size": 0, 00:19:37.013 "rdma_cm_event_timeout_ms": 0, 00:19:37.013 "dhchap_digests": [ 00:19:37.013 "sha256", 00:19:37.013 "sha384", 00:19:37.013 "sha512" 00:19:37.013 ], 00:19:37.013 "dhchap_dhgroups": [ 00:19:37.013 "null", 00:19:37.013 "ffdhe2048", 00:19:37.013 "ffdhe3072", 00:19:37.013 "ffdhe4096", 00:19:37.013 "ffdhe6144", 00:19:37.013 "ffdhe8192" 00:19:37.013 ] 00:19:37.013 } 00:19:37.013 }, 00:19:37.013 { 00:19:37.013 "method": "bdev_nvme_attach_controller", 00:19:37.013 "params": { 00:19:37.013 "name": "TLSTEST", 00:19:37.013 "trtype": "TCP", 00:19:37.013 "adrfam": "IPv4", 00:19:37.013 "traddr": "10.0.0.2", 00:19:37.013 "trsvcid": "4420", 00:19:37.013 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:37.013 "prchk_reftag": false, 00:19:37.013 "prchk_guard": false, 00:19:37.013 "ctrlr_loss_timeout_sec": 0, 00:19:37.013 "reconnect_delay_sec": 0, 00:19:37.013 "fast_io_fail_timeout_sec": 0, 00:19:37.013 "psk": "/tmp/tmp.wYjyZPEOU0", 00:19:37.013 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:37.013 "hdgst": false, 00:19:37.013 "ddgst": false 00:19:37.013 } 00:19:37.013 }, 00:19:37.013 { 00:19:37.013 "method": "bdev_nvme_set_hotplug", 00:19:37.013 "params": { 00:19:37.013 "period_us": 100000, 00:19:37.013 "enable": false 00:19:37.013 } 00:19:37.013 }, 00:19:37.013 { 00:19:37.013 "method": "bdev_wait_for_examine" 00:19:37.013 } 00:19:37.013 ] 00:19:37.013 }, 00:19:37.013 { 00:19:37.013 "subsystem": "nbd", 00:19:37.013 "config": [] 00:19:37.013 } 00:19:37.013 ] 00:19:37.013 }' 00:19:37.013 [2024-07-25 09:01:44.015178] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
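The bdev_nvme_attach_controller entry in the JSON above still passes the pre-shared key as a raw file path, which is what triggers the spdk_nvme_ctrlr_opts.psk deprecation warning printed below. The non-deprecated, keyring-based equivalent, as used in the later passes of this run, looks roughly like this:

  # Register the PSK file under a key name on the initiator side, then
  # reference it by name when attaching (commands as used later in this run):
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      keyring_file_add_key key0 /tmp/tmp.wYjyZPEOU0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1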
00:19:37.013 [2024-07-25 09:01:44.016121] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76315 ] 00:19:37.271 [2024-07-25 09:01:44.192394] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.529 [2024-07-25 09:01:44.448458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:37.788 [2024-07-25 09:01:44.761378] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:38.046 [2024-07-25 09:01:44.929654] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:38.046 [2024-07-25 09:01:44.929921] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:38.046 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:38.046 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:38.046 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:38.305 Running I/O for 10 seconds... 00:19:48.315 00:19:48.315 Latency(us) 00:19:48.315 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:48.315 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:48.315 Verification LBA range: start 0x0 length 0x2000 00:19:48.315 TLSTESTn1 : 10.02 2835.04 11.07 0.00 0.00 45056.43 8698.41 40036.54 00:19:48.315 =================================================================================================================== 00:19:48.315 Total : 2835.04 11.07 0.00 0.00 45056.43 8698.41 40036.54 00:19:48.315 0 00:19:48.315 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:48.315 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 76315 00:19:48.315 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 76315 ']' 00:19:48.315 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 76315 00:19:48.315 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:48.315 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:48.315 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76315 00:19:48.315 killing process with pid 76315 00:19:48.315 Received shutdown signal, test time was about 10.000000 seconds 00:19:48.315 00:19:48.315 Latency(us) 00:19:48.315 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:48.315 =================================================================================================================== 00:19:48.315 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:48.315 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:48.315 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:48.315 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 76315' 00:19:48.315 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 76315 00:19:48.315 [2024-07-25 09:01:55.277080] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:48.315 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 76315 00:19:49.689 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 76283 00:19:49.689 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 76283 ']' 00:19:49.689 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 76283 00:19:49.689 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:49.689 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:49.689 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76283 00:19:49.689 killing process with pid 76283 00:19:49.689 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:49.689 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:49.689 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76283' 00:19:49.689 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 76283 00:19:49.689 [2024-07-25 09:01:56.519031] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:49.689 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 76283 00:19:51.115 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:19:51.115 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:51.115 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:51.115 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.115 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=76479 00:19:51.115 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 76479 00:19:51.115 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:51.115 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 76479 ']' 00:19:51.115 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:51.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:51.115 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:51.115 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
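The waitforlisten step traced above simply blocks until the freshly started target answers on its RPC socket before any configuration is sent. A minimal stand-in for that helper could look like the following; the rpc_get_methods probe and the retry interval are illustrative, not the autotest implementation.

  # Poll the target's RPC socket until it responds, then proceed with setup.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done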
00:19:51.115 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:51.115 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.115 [2024-07-25 09:01:57.933127] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:19:51.115 [2024-07-25 09:01:57.933302] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:51.115 [2024-07-25 09:01:58.099112] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.373 [2024-07-25 09:01:58.335037] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:51.373 [2024-07-25 09:01:58.335097] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:51.373 [2024-07-25 09:01:58.335114] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:51.373 [2024-07-25 09:01:58.335130] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:51.373 [2024-07-25 09:01:58.335142] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:51.373 [2024-07-25 09:01:58.335198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.631 [2024-07-25 09:01:58.541686] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:51.890 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:51.890 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:51.890 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:51.890 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:51.890 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.890 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:51.890 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.wYjyZPEOU0 00:19:51.890 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.wYjyZPEOU0 00:19:51.890 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:52.146 [2024-07-25 09:01:59.117320] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:52.147 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:52.406 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:52.663 [2024-07-25 09:01:59.665611] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:52.663 [2024-07-25 09:01:59.665907] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:52.663 09:01:59 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:52.921 malloc0 00:19:52.921 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:53.179 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wYjyZPEOU0 00:19:53.437 [2024-07-25 09:02:00.443147] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:53.437 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=76533 00:19:53.437 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:53.437 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:53.437 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 76533 /var/tmp/bdevperf.sock 00:19:53.437 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 76533 ']' 00:19:53.437 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:53.437 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:53.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:53.437 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:53.437 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:53.437 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.695 [2024-07-25 09:02:00.561223] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
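For reference, the setup_nvmf_tgt helper traced above amounts to the following RPC sequence against the default target socket. Arguments are copied from the trace; -k on the listener is what enables TLS on the port, and the --psk path form is the one flagged as deprecated in the log.

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -s SPDK00000000000001 -m 10
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -k
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wYjyZPEOU0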
00:19:53.695 [2024-07-25 09:02:00.561366] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76533 ] 00:19:53.695 [2024-07-25 09:02:00.726712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.953 [2024-07-25 09:02:00.964894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:54.212 [2024-07-25 09:02:01.167721] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:54.470 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:54.470 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:54.470 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.wYjyZPEOU0 00:19:54.728 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:54.985 [2024-07-25 09:02:01.983212] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:54.985 nvme0n1 00:19:54.985 09:02:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:55.243 Running I/O for 1 seconds... 00:19:56.227 00:19:56.227 Latency(us) 00:19:56.227 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.227 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:56.227 Verification LBA range: start 0x0 length 0x2000 00:19:56.227 nvme0n1 : 1.02 2856.06 11.16 0.00 0.00 44245.44 9651.67 34793.66 00:19:56.227 =================================================================================================================== 00:19:56.227 Total : 2856.06 11.16 0.00 0.00 44245.44 9651.67 34793.66 00:19:56.227 0 00:19:56.227 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 76533 00:19:56.227 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 76533 ']' 00:19:56.227 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 76533 00:19:56.227 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:56.227 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:56.227 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76533 00:19:56.227 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:56.227 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:56.227 killing process with pid 76533 00:19:56.227 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76533' 00:19:56.227 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 76533 00:19:56.227 Received shutdown signal, 
test time was about 1.000000 seconds 00:19:56.227 00:19:56.227 Latency(us) 00:19:56.227 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.227 =================================================================================================================== 00:19:56.227 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:56.227 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 76533 00:19:57.599 09:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 76479 00:19:57.599 09:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 76479 ']' 00:19:57.599 09:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 76479 00:19:57.599 09:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:57.599 09:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:57.599 09:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76479 00:19:57.599 09:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:57.599 09:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:57.599 killing process with pid 76479 00:19:57.599 09:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76479' 00:19:57.599 09:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 76479 00:19:57.599 [2024-07-25 09:02:04.385931] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:57.599 09:02:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 76479 00:19:58.991 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:19:58.991 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:58.991 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:58.991 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.991 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=76619 00:19:58.991 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:58.991 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 76619 00:19:58.991 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 76619 ']' 00:19:58.991 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.991 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:58.991 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
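In the final pass that starts below, the target registers the PSK in the keyring subsystem and references it by name rather than by path; that is why the save_config output at the end of this run carries a keyring_file_add_key entry and "psk": "key0" under nvmf_subsystem_add_host. A condensed sketch of that target-side wiring follows; the individual RPCs are inferred from the saved configuration and the traced keyring/attach calls, so treat them as an approximation.

  # Register the PSK under a name on the target, then grant the host access
  # to the subsystem by key name instead of a raw path.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.wYjyZPEOU0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk key0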
00:19:58.991 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:58.991 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.991 [2024-07-25 09:02:05.757026] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:19:58.991 [2024-07-25 09:02:05.757177] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:58.991 [2024-07-25 09:02:05.923622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.271 [2024-07-25 09:02:06.163370] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:59.271 [2024-07-25 09:02:06.163446] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:59.271 [2024-07-25 09:02:06.163468] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:59.271 [2024-07-25 09:02:06.163488] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:59.271 [2024-07-25 09:02:06.163503] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:59.271 [2024-07-25 09:02:06.163558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.271 [2024-07-25 09:02:06.370098] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:59.836 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:59.836 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:59.836 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:59.836 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:59.836 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.836 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:59.836 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:19:59.836 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.836 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.836 [2024-07-25 09:02:06.770353] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:59.836 malloc0 00:19:59.836 [2024-07-25 09:02:06.847021] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:59.836 [2024-07-25 09:02:06.847318] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:59.836 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.836 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=76651 00:19:59.836 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:59.836 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 76651 
/var/tmp/bdevperf.sock 00:19:59.836 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 76651 ']' 00:19:59.836 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:59.836 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:59.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:59.836 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:59.836 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:59.836 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.095 [2024-07-25 09:02:06.967273] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:00.095 [2024-07-25 09:02:06.967431] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76651 ] 00:20:00.095 [2024-07-25 09:02:07.129160] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.354 [2024-07-25 09:02:07.378766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.612 [2024-07-25 09:02:07.579591] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:00.871 09:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:00.871 09:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:00.871 09:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.wYjyZPEOU0 00:20:01.128 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:01.386 [2024-07-25 09:02:08.335581] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:01.386 nvme0n1 00:20:01.386 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:01.644 Running I/O for 1 seconds... 
00:20:02.579 00:20:02.579 Latency(us) 00:20:02.579 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:02.579 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:02.579 Verification LBA range: start 0x0 length 0x2000 00:20:02.579 nvme0n1 : 1.04 2340.44 9.14 0.00 0.00 53806.39 12571.00 34078.72 00:20:02.579 =================================================================================================================== 00:20:02.579 Total : 2340.44 9.14 0.00 0.00 53806.39 12571.00 34078.72 00:20:02.579 0 00:20:02.579 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:20:02.579 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.579 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:02.839 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.839 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:20:02.839 "subsystems": [ 00:20:02.839 { 00:20:02.839 "subsystem": "keyring", 00:20:02.839 "config": [ 00:20:02.839 { 00:20:02.839 "method": "keyring_file_add_key", 00:20:02.839 "params": { 00:20:02.839 "name": "key0", 00:20:02.839 "path": "/tmp/tmp.wYjyZPEOU0" 00:20:02.839 } 00:20:02.839 } 00:20:02.839 ] 00:20:02.839 }, 00:20:02.839 { 00:20:02.839 "subsystem": "iobuf", 00:20:02.839 "config": [ 00:20:02.839 { 00:20:02.839 "method": "iobuf_set_options", 00:20:02.839 "params": { 00:20:02.839 "small_pool_count": 8192, 00:20:02.839 "large_pool_count": 1024, 00:20:02.839 "small_bufsize": 8192, 00:20:02.839 "large_bufsize": 135168 00:20:02.839 } 00:20:02.839 } 00:20:02.839 ] 00:20:02.839 }, 00:20:02.839 { 00:20:02.839 "subsystem": "sock", 00:20:02.839 "config": [ 00:20:02.839 { 00:20:02.839 "method": "sock_set_default_impl", 00:20:02.839 "params": { 00:20:02.839 "impl_name": "uring" 00:20:02.839 } 00:20:02.839 }, 00:20:02.839 { 00:20:02.839 "method": "sock_impl_set_options", 00:20:02.839 "params": { 00:20:02.839 "impl_name": "ssl", 00:20:02.839 "recv_buf_size": 4096, 00:20:02.839 "send_buf_size": 4096, 00:20:02.839 "enable_recv_pipe": true, 00:20:02.839 "enable_quickack": false, 00:20:02.839 "enable_placement_id": 0, 00:20:02.839 "enable_zerocopy_send_server": true, 00:20:02.839 "enable_zerocopy_send_client": false, 00:20:02.839 "zerocopy_threshold": 0, 00:20:02.839 "tls_version": 0, 00:20:02.839 "enable_ktls": false 00:20:02.839 } 00:20:02.839 }, 00:20:02.839 { 00:20:02.839 "method": "sock_impl_set_options", 00:20:02.839 "params": { 00:20:02.839 "impl_name": "posix", 00:20:02.839 "recv_buf_size": 2097152, 00:20:02.839 "send_buf_size": 2097152, 00:20:02.839 "enable_recv_pipe": true, 00:20:02.839 "enable_quickack": false, 00:20:02.839 "enable_placement_id": 0, 00:20:02.839 "enable_zerocopy_send_server": true, 00:20:02.839 "enable_zerocopy_send_client": false, 00:20:02.839 "zerocopy_threshold": 0, 00:20:02.839 "tls_version": 0, 00:20:02.839 "enable_ktls": false 00:20:02.839 } 00:20:02.839 }, 00:20:02.839 { 00:20:02.839 "method": "sock_impl_set_options", 00:20:02.839 "params": { 00:20:02.839 "impl_name": "uring", 00:20:02.839 "recv_buf_size": 2097152, 00:20:02.839 "send_buf_size": 2097152, 00:20:02.839 "enable_recv_pipe": true, 00:20:02.839 "enable_quickack": false, 00:20:02.839 "enable_placement_id": 0, 00:20:02.839 "enable_zerocopy_send_server": false, 00:20:02.839 "enable_zerocopy_send_client": false, 00:20:02.839 
"zerocopy_threshold": 0, 00:20:02.839 "tls_version": 0, 00:20:02.839 "enable_ktls": false 00:20:02.839 } 00:20:02.839 } 00:20:02.839 ] 00:20:02.839 }, 00:20:02.839 { 00:20:02.839 "subsystem": "vmd", 00:20:02.839 "config": [] 00:20:02.839 }, 00:20:02.839 { 00:20:02.839 "subsystem": "accel", 00:20:02.839 "config": [ 00:20:02.839 { 00:20:02.839 "method": "accel_set_options", 00:20:02.839 "params": { 00:20:02.839 "small_cache_size": 128, 00:20:02.839 "large_cache_size": 16, 00:20:02.839 "task_count": 2048, 00:20:02.839 "sequence_count": 2048, 00:20:02.839 "buf_count": 2048 00:20:02.839 } 00:20:02.839 } 00:20:02.839 ] 00:20:02.839 }, 00:20:02.839 { 00:20:02.839 "subsystem": "bdev", 00:20:02.839 "config": [ 00:20:02.839 { 00:20:02.839 "method": "bdev_set_options", 00:20:02.839 "params": { 00:20:02.839 "bdev_io_pool_size": 65535, 00:20:02.839 "bdev_io_cache_size": 256, 00:20:02.839 "bdev_auto_examine": true, 00:20:02.839 "iobuf_small_cache_size": 128, 00:20:02.839 "iobuf_large_cache_size": 16 00:20:02.839 } 00:20:02.839 }, 00:20:02.839 { 00:20:02.839 "method": "bdev_raid_set_options", 00:20:02.839 "params": { 00:20:02.839 "process_window_size_kb": 1024, 00:20:02.839 "process_max_bandwidth_mb_sec": 0 00:20:02.839 } 00:20:02.839 }, 00:20:02.839 { 00:20:02.839 "method": "bdev_iscsi_set_options", 00:20:02.839 "params": { 00:20:02.839 "timeout_sec": 30 00:20:02.839 } 00:20:02.839 }, 00:20:02.839 { 00:20:02.839 "method": "bdev_nvme_set_options", 00:20:02.839 "params": { 00:20:02.839 "action_on_timeout": "none", 00:20:02.839 "timeout_us": 0, 00:20:02.839 "timeout_admin_us": 0, 00:20:02.839 "keep_alive_timeout_ms": 10000, 00:20:02.839 "arbitration_burst": 0, 00:20:02.840 "low_priority_weight": 0, 00:20:02.840 "medium_priority_weight": 0, 00:20:02.840 "high_priority_weight": 0, 00:20:02.840 "nvme_adminq_poll_period_us": 10000, 00:20:02.840 "nvme_ioq_poll_period_us": 0, 00:20:02.840 "io_queue_requests": 0, 00:20:02.840 "delay_cmd_submit": true, 00:20:02.840 "transport_retry_count": 4, 00:20:02.840 "bdev_retry_count": 3, 00:20:02.840 "transport_ack_timeout": 0, 00:20:02.840 "ctrlr_loss_timeout_sec": 0, 00:20:02.840 "reconnect_delay_sec": 0, 00:20:02.840 "fast_io_fail_timeout_sec": 0, 00:20:02.840 "disable_auto_failback": false, 00:20:02.840 "generate_uuids": false, 00:20:02.840 "transport_tos": 0, 00:20:02.840 "nvme_error_stat": false, 00:20:02.840 "rdma_srq_size": 0, 00:20:02.840 "io_path_stat": false, 00:20:02.840 "allow_accel_sequence": false, 00:20:02.840 "rdma_max_cq_size": 0, 00:20:02.840 "rdma_cm_event_timeout_ms": 0, 00:20:02.840 "dhchap_digests": [ 00:20:02.840 "sha256", 00:20:02.840 "sha384", 00:20:02.840 "sha512" 00:20:02.840 ], 00:20:02.840 "dhchap_dhgroups": [ 00:20:02.840 "null", 00:20:02.840 "ffdhe2048", 00:20:02.840 "ffdhe3072", 00:20:02.840 "ffdhe4096", 00:20:02.840 "ffdhe6144", 00:20:02.840 "ffdhe8192" 00:20:02.840 ] 00:20:02.840 } 00:20:02.840 }, 00:20:02.840 { 00:20:02.840 "method": "bdev_nvme_set_hotplug", 00:20:02.840 "params": { 00:20:02.840 "period_us": 100000, 00:20:02.840 "enable": false 00:20:02.840 } 00:20:02.840 }, 00:20:02.840 { 00:20:02.840 "method": "bdev_malloc_create", 00:20:02.840 "params": { 00:20:02.840 "name": "malloc0", 00:20:02.840 "num_blocks": 8192, 00:20:02.840 "block_size": 4096, 00:20:02.840 "physical_block_size": 4096, 00:20:02.840 "uuid": "2327306d-e8c0-4151-96dd-bd9c8b9d11fc", 00:20:02.840 "optimal_io_boundary": 0, 00:20:02.840 "md_size": 0, 00:20:02.840 "dif_type": 0, 00:20:02.840 "dif_is_head_of_md": false, 00:20:02.840 "dif_pi_format": 0 00:20:02.840 } 
00:20:02.840 }, 00:20:02.840 { 00:20:02.840 "method": "bdev_wait_for_examine" 00:20:02.840 } 00:20:02.840 ] 00:20:02.840 }, 00:20:02.840 { 00:20:02.840 "subsystem": "nbd", 00:20:02.840 "config": [] 00:20:02.840 }, 00:20:02.840 { 00:20:02.840 "subsystem": "scheduler", 00:20:02.840 "config": [ 00:20:02.840 { 00:20:02.840 "method": "framework_set_scheduler", 00:20:02.840 "params": { 00:20:02.840 "name": "static" 00:20:02.840 } 00:20:02.840 } 00:20:02.840 ] 00:20:02.840 }, 00:20:02.840 { 00:20:02.840 "subsystem": "nvmf", 00:20:02.840 "config": [ 00:20:02.840 { 00:20:02.840 "method": "nvmf_set_config", 00:20:02.840 "params": { 00:20:02.840 "discovery_filter": "match_any", 00:20:02.840 "admin_cmd_passthru": { 00:20:02.840 "identify_ctrlr": false 00:20:02.840 } 00:20:02.840 } 00:20:02.840 }, 00:20:02.840 { 00:20:02.840 "method": "nvmf_set_max_subsystems", 00:20:02.840 "params": { 00:20:02.840 "max_subsystems": 1024 00:20:02.840 } 00:20:02.840 }, 00:20:02.840 { 00:20:02.840 "method": "nvmf_set_crdt", 00:20:02.840 "params": { 00:20:02.840 "crdt1": 0, 00:20:02.840 "crdt2": 0, 00:20:02.840 "crdt3": 0 00:20:02.840 } 00:20:02.840 }, 00:20:02.840 { 00:20:02.840 "method": "nvmf_create_transport", 00:20:02.840 "params": { 00:20:02.840 "trtype": "TCP", 00:20:02.840 "max_queue_depth": 128, 00:20:02.840 "max_io_qpairs_per_ctrlr": 127, 00:20:02.840 "in_capsule_data_size": 4096, 00:20:02.840 "max_io_size": 131072, 00:20:02.840 "io_unit_size": 131072, 00:20:02.840 "max_aq_depth": 128, 00:20:02.840 "num_shared_buffers": 511, 00:20:02.840 "buf_cache_size": 4294967295, 00:20:02.840 "dif_insert_or_strip": false, 00:20:02.840 "zcopy": false, 00:20:02.840 "c2h_success": false, 00:20:02.840 "sock_priority": 0, 00:20:02.840 "abort_timeout_sec": 1, 00:20:02.840 "ack_timeout": 0, 00:20:02.840 "data_wr_pool_size": 0 00:20:02.840 } 00:20:02.840 }, 00:20:02.840 { 00:20:02.840 "method": "nvmf_create_subsystem", 00:20:02.840 "params": { 00:20:02.840 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:02.840 "allow_any_host": false, 00:20:02.840 "serial_number": "00000000000000000000", 00:20:02.840 "model_number": "SPDK bdev Controller", 00:20:02.840 "max_namespaces": 32, 00:20:02.840 "min_cntlid": 1, 00:20:02.840 "max_cntlid": 65519, 00:20:02.840 "ana_reporting": false 00:20:02.840 } 00:20:02.840 }, 00:20:02.840 { 00:20:02.840 "method": "nvmf_subsystem_add_host", 00:20:02.840 "params": { 00:20:02.840 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:02.840 "host": "nqn.2016-06.io.spdk:host1", 00:20:02.840 "psk": "key0" 00:20:02.840 } 00:20:02.840 }, 00:20:02.840 { 00:20:02.840 "method": "nvmf_subsystem_add_ns", 00:20:02.840 "params": { 00:20:02.840 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:02.840 "namespace": { 00:20:02.840 "nsid": 1, 00:20:02.840 "bdev_name": "malloc0", 00:20:02.840 "nguid": "2327306DE8C0415196DDBD9C8B9D11FC", 00:20:02.840 "uuid": "2327306d-e8c0-4151-96dd-bd9c8b9d11fc", 00:20:02.840 "no_auto_visible": false 00:20:02.840 } 00:20:02.840 } 00:20:02.840 }, 00:20:02.840 { 00:20:02.840 "method": "nvmf_subsystem_add_listener", 00:20:02.840 "params": { 00:20:02.840 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:02.840 "listen_address": { 00:20:02.840 "trtype": "TCP", 00:20:02.840 "adrfam": "IPv4", 00:20:02.840 "traddr": "10.0.0.2", 00:20:02.840 "trsvcid": "4420" 00:20:02.840 }, 00:20:02.840 "secure_channel": false, 00:20:02.840 "sock_impl": "ssl" 00:20:02.840 } 00:20:02.840 } 00:20:02.840 ] 00:20:02.840 } 00:20:02.840 ] 00:20:02.840 }' 00:20:02.840 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:03.115 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:20:03.115 "subsystems": [ 00:20:03.115 { 00:20:03.115 "subsystem": "keyring", 00:20:03.115 "config": [ 00:20:03.115 { 00:20:03.115 "method": "keyring_file_add_key", 00:20:03.115 "params": { 00:20:03.115 "name": "key0", 00:20:03.115 "path": "/tmp/tmp.wYjyZPEOU0" 00:20:03.115 } 00:20:03.115 } 00:20:03.115 ] 00:20:03.115 }, 00:20:03.115 { 00:20:03.115 "subsystem": "iobuf", 00:20:03.115 "config": [ 00:20:03.115 { 00:20:03.115 "method": "iobuf_set_options", 00:20:03.115 "params": { 00:20:03.115 "small_pool_count": 8192, 00:20:03.115 "large_pool_count": 1024, 00:20:03.115 "small_bufsize": 8192, 00:20:03.115 "large_bufsize": 135168 00:20:03.115 } 00:20:03.115 } 00:20:03.115 ] 00:20:03.115 }, 00:20:03.115 { 00:20:03.115 "subsystem": "sock", 00:20:03.115 "config": [ 00:20:03.115 { 00:20:03.115 "method": "sock_set_default_impl", 00:20:03.115 "params": { 00:20:03.115 "impl_name": "uring" 00:20:03.115 } 00:20:03.115 }, 00:20:03.115 { 00:20:03.115 "method": "sock_impl_set_options", 00:20:03.115 "params": { 00:20:03.115 "impl_name": "ssl", 00:20:03.115 "recv_buf_size": 4096, 00:20:03.115 "send_buf_size": 4096, 00:20:03.115 "enable_recv_pipe": true, 00:20:03.115 "enable_quickack": false, 00:20:03.115 "enable_placement_id": 0, 00:20:03.115 "enable_zerocopy_send_server": true, 00:20:03.115 "enable_zerocopy_send_client": false, 00:20:03.115 "zerocopy_threshold": 0, 00:20:03.115 "tls_version": 0, 00:20:03.115 "enable_ktls": false 00:20:03.115 } 00:20:03.115 }, 00:20:03.115 { 00:20:03.115 "method": "sock_impl_set_options", 00:20:03.115 "params": { 00:20:03.115 "impl_name": "posix", 00:20:03.115 "recv_buf_size": 2097152, 00:20:03.115 "send_buf_size": 2097152, 00:20:03.115 "enable_recv_pipe": true, 00:20:03.115 "enable_quickack": false, 00:20:03.115 "enable_placement_id": 0, 00:20:03.115 "enable_zerocopy_send_server": true, 00:20:03.115 "enable_zerocopy_send_client": false, 00:20:03.115 "zerocopy_threshold": 0, 00:20:03.115 "tls_version": 0, 00:20:03.115 "enable_ktls": false 00:20:03.115 } 00:20:03.115 }, 00:20:03.115 { 00:20:03.115 "method": "sock_impl_set_options", 00:20:03.115 "params": { 00:20:03.115 "impl_name": "uring", 00:20:03.115 "recv_buf_size": 2097152, 00:20:03.115 "send_buf_size": 2097152, 00:20:03.115 "enable_recv_pipe": true, 00:20:03.115 "enable_quickack": false, 00:20:03.115 "enable_placement_id": 0, 00:20:03.115 "enable_zerocopy_send_server": false, 00:20:03.115 "enable_zerocopy_send_client": false, 00:20:03.115 "zerocopy_threshold": 0, 00:20:03.115 "tls_version": 0, 00:20:03.115 "enable_ktls": false 00:20:03.115 } 00:20:03.115 } 00:20:03.115 ] 00:20:03.115 }, 00:20:03.115 { 00:20:03.115 "subsystem": "vmd", 00:20:03.115 "config": [] 00:20:03.115 }, 00:20:03.115 { 00:20:03.115 "subsystem": "accel", 00:20:03.115 "config": [ 00:20:03.115 { 00:20:03.115 "method": "accel_set_options", 00:20:03.115 "params": { 00:20:03.115 "small_cache_size": 128, 00:20:03.115 "large_cache_size": 16, 00:20:03.115 "task_count": 2048, 00:20:03.115 "sequence_count": 2048, 00:20:03.115 "buf_count": 2048 00:20:03.115 } 00:20:03.115 } 00:20:03.115 ] 00:20:03.115 }, 00:20:03.115 { 00:20:03.115 "subsystem": "bdev", 00:20:03.115 "config": [ 00:20:03.115 { 00:20:03.115 "method": "bdev_set_options", 00:20:03.116 "params": { 00:20:03.116 "bdev_io_pool_size": 65535, 00:20:03.116 "bdev_io_cache_size": 256, 00:20:03.116 "bdev_auto_examine": true, 
00:20:03.116 "iobuf_small_cache_size": 128, 00:20:03.116 "iobuf_large_cache_size": 16 00:20:03.116 } 00:20:03.116 }, 00:20:03.116 { 00:20:03.116 "method": "bdev_raid_set_options", 00:20:03.116 "params": { 00:20:03.116 "process_window_size_kb": 1024, 00:20:03.116 "process_max_bandwidth_mb_sec": 0 00:20:03.116 } 00:20:03.116 }, 00:20:03.116 { 00:20:03.116 "method": "bdev_iscsi_set_options", 00:20:03.116 "params": { 00:20:03.116 "timeout_sec": 30 00:20:03.116 } 00:20:03.116 }, 00:20:03.116 { 00:20:03.116 "method": "bdev_nvme_set_options", 00:20:03.116 "params": { 00:20:03.116 "action_on_timeout": "none", 00:20:03.116 "timeout_us": 0, 00:20:03.116 "timeout_admin_us": 0, 00:20:03.116 "keep_alive_timeout_ms": 10000, 00:20:03.116 "arbitration_burst": 0, 00:20:03.116 "low_priority_weight": 0, 00:20:03.116 "medium_priority_weight": 0, 00:20:03.116 "high_priority_weight": 0, 00:20:03.116 "nvme_adminq_poll_period_us": 10000, 00:20:03.116 "nvme_ioq_poll_period_us": 0, 00:20:03.116 "io_queue_requests": 512, 00:20:03.116 "delay_cmd_submit": true, 00:20:03.116 "transport_retry_count": 4, 00:20:03.116 "bdev_retry_count": 3, 00:20:03.116 "transport_ack_timeout": 0, 00:20:03.116 "ctrlr_loss_timeout_sec": 0, 00:20:03.116 "reconnect_delay_sec": 0, 00:20:03.116 "fast_io_fail_timeout_sec": 0, 00:20:03.116 "disable_auto_failback": false, 00:20:03.116 "generate_uuids": false, 00:20:03.116 "transport_tos": 0, 00:20:03.116 "nvme_error_stat": false, 00:20:03.116 "rdma_srq_size": 0, 00:20:03.116 "io_path_stat": false, 00:20:03.116 "allow_accel_sequence": false, 00:20:03.116 "rdma_max_cq_size": 0, 00:20:03.116 "rdma_cm_event_timeout_ms": 0, 00:20:03.116 "dhchap_digests": [ 00:20:03.116 "sha256", 00:20:03.116 "sha384", 00:20:03.116 "sha512" 00:20:03.116 ], 00:20:03.116 "dhchap_dhgroups": [ 00:20:03.116 "null", 00:20:03.116 "ffdhe2048", 00:20:03.116 "ffdhe3072", 00:20:03.116 "ffdhe4096", 00:20:03.116 "ffdhe6144", 00:20:03.116 "ffdhe8192" 00:20:03.116 ] 00:20:03.116 } 00:20:03.116 }, 00:20:03.116 { 00:20:03.116 "method": "bdev_nvme_attach_controller", 00:20:03.116 "params": { 00:20:03.116 "name": "nvme0", 00:20:03.116 "trtype": "TCP", 00:20:03.116 "adrfam": "IPv4", 00:20:03.116 "traddr": "10.0.0.2", 00:20:03.116 "trsvcid": "4420", 00:20:03.116 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.116 "prchk_reftag": false, 00:20:03.116 "prchk_guard": false, 00:20:03.116 "ctrlr_loss_timeout_sec": 0, 00:20:03.116 "reconnect_delay_sec": 0, 00:20:03.116 "fast_io_fail_timeout_sec": 0, 00:20:03.116 "psk": "key0", 00:20:03.116 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:03.116 "hdgst": false, 00:20:03.116 "ddgst": false 00:20:03.116 } 00:20:03.116 }, 00:20:03.116 { 00:20:03.116 "method": "bdev_nvme_set_hotplug", 00:20:03.116 "params": { 00:20:03.116 "period_us": 100000, 00:20:03.116 "enable": false 00:20:03.116 } 00:20:03.116 }, 00:20:03.116 { 00:20:03.116 "method": "bdev_enable_histogram", 00:20:03.116 "params": { 00:20:03.116 "name": "nvme0n1", 00:20:03.116 "enable": true 00:20:03.116 } 00:20:03.116 }, 00:20:03.116 { 00:20:03.116 "method": "bdev_wait_for_examine" 00:20:03.116 } 00:20:03.116 ] 00:20:03.116 }, 00:20:03.116 { 00:20:03.116 "subsystem": "nbd", 00:20:03.116 "config": [] 00:20:03.116 } 00:20:03.116 ] 00:20:03.116 }' 00:20:03.116 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 76651 00:20:03.116 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 76651 ']' 00:20:03.116 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- 
# kill -0 76651 00:20:03.116 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:03.116 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:03.116 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76651 00:20:03.116 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:03.116 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:03.116 killing process with pid 76651 00:20:03.116 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76651' 00:20:03.116 Received shutdown signal, test time was about 1.000000 seconds 00:20:03.116 00:20:03.116 Latency(us) 00:20:03.116 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.116 =================================================================================================================== 00:20:03.116 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:03.116 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 76651 00:20:03.116 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 76651 00:20:04.519 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 76619 00:20:04.519 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 76619 ']' 00:20:04.519 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 76619 00:20:04.519 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:04.519 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:04.519 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76619 00:20:04.519 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:04.519 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:04.519 killing process with pid 76619 00:20:04.519 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76619' 00:20:04.519 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 76619 00:20:04.519 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 76619 00:20:05.897 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:20:05.897 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:05.897 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:05.897 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:20:05.897 "subsystems": [ 00:20:05.897 { 00:20:05.897 "subsystem": "keyring", 00:20:05.897 "config": [ 00:20:05.897 { 00:20:05.897 "method": "keyring_file_add_key", 00:20:05.897 "params": { 00:20:05.897 "name": "key0", 00:20:05.897 "path": "/tmp/tmp.wYjyZPEOU0" 00:20:05.897 } 00:20:05.897 } 00:20:05.897 ] 00:20:05.897 }, 00:20:05.897 { 00:20:05.897 "subsystem": "iobuf", 00:20:05.897 "config": [ 00:20:05.897 { 00:20:05.897 "method": 
"iobuf_set_options", 00:20:05.897 "params": { 00:20:05.897 "small_pool_count": 8192, 00:20:05.897 "large_pool_count": 1024, 00:20:05.897 "small_bufsize": 8192, 00:20:05.897 "large_bufsize": 135168 00:20:05.897 } 00:20:05.897 } 00:20:05.897 ] 00:20:05.897 }, 00:20:05.897 { 00:20:05.897 "subsystem": "sock", 00:20:05.897 "config": [ 00:20:05.897 { 00:20:05.897 "method": "sock_set_default_impl", 00:20:05.897 "params": { 00:20:05.897 "impl_name": "uring" 00:20:05.897 } 00:20:05.897 }, 00:20:05.897 { 00:20:05.897 "method": "sock_impl_set_options", 00:20:05.897 "params": { 00:20:05.897 "impl_name": "ssl", 00:20:05.897 "recv_buf_size": 4096, 00:20:05.897 "send_buf_size": 4096, 00:20:05.897 "enable_recv_pipe": true, 00:20:05.897 "enable_quickack": false, 00:20:05.897 "enable_placement_id": 0, 00:20:05.897 "enable_zerocopy_send_server": true, 00:20:05.897 "enable_zerocopy_send_client": false, 00:20:05.897 "zerocopy_threshold": 0, 00:20:05.897 "tls_version": 0, 00:20:05.897 "enable_ktls": false 00:20:05.897 } 00:20:05.897 }, 00:20:05.897 { 00:20:05.898 "method": "sock_impl_set_options", 00:20:05.898 "params": { 00:20:05.898 "impl_name": "posix", 00:20:05.898 "recv_buf_size": 2097152, 00:20:05.898 "send_buf_size": 2097152, 00:20:05.898 "enable_recv_pipe": true, 00:20:05.898 "enable_quickack": false, 00:20:05.898 "enable_placement_id": 0, 00:20:05.898 "enable_zerocopy_send_server": true, 00:20:05.898 "enable_zerocopy_send_client": false, 00:20:05.898 "zerocopy_threshold": 0, 00:20:05.898 "tls_version": 0, 00:20:05.898 "enable_ktls": false 00:20:05.898 } 00:20:05.898 }, 00:20:05.898 { 00:20:05.898 "method": "sock_impl_set_options", 00:20:05.898 "params": { 00:20:05.898 "impl_name": "uring", 00:20:05.898 "recv_buf_size": 2097152, 00:20:05.898 "send_buf_size": 2097152, 00:20:05.898 "enable_recv_pipe": true, 00:20:05.898 "enable_quickack": false, 00:20:05.898 "enable_placement_id": 0, 00:20:05.898 "enable_zerocopy_send_server": false, 00:20:05.898 "enable_zerocopy_send_client": false, 00:20:05.898 "zerocopy_threshold": 0, 00:20:05.898 "tls_version": 0, 00:20:05.898 "enable_ktls": false 00:20:05.898 } 00:20:05.898 } 00:20:05.898 ] 00:20:05.898 }, 00:20:05.898 { 00:20:05.898 "subsystem": "vmd", 00:20:05.898 "config": [] 00:20:05.898 }, 00:20:05.898 { 00:20:05.898 "subsystem": "accel", 00:20:05.898 "config": [ 00:20:05.898 { 00:20:05.898 "method": "accel_set_options", 00:20:05.898 "params": { 00:20:05.898 "small_cache_size": 128, 00:20:05.898 "large_cache_size": 16, 00:20:05.898 "task_count": 2048, 00:20:05.898 "sequence_count": 2048, 00:20:05.898 "buf_count": 2048 00:20:05.898 } 00:20:05.898 } 00:20:05.898 ] 00:20:05.898 }, 00:20:05.898 { 00:20:05.898 "subsystem": "bdev", 00:20:05.898 "config": [ 00:20:05.898 { 00:20:05.898 "method": "bdev_set_options", 00:20:05.898 "params": { 00:20:05.898 "bdev_io_pool_size": 65535, 00:20:05.898 "bdev_io_cache_size": 256, 00:20:05.898 "bdev_auto_examine": true, 00:20:05.898 "iobuf_small_cache_size": 128, 00:20:05.898 "iobuf_large_cache_size": 16 00:20:05.898 } 00:20:05.898 }, 00:20:05.898 { 00:20:05.898 "method": "bdev_raid_set_options", 00:20:05.898 "params": { 00:20:05.898 "process_window_size_kb": 1024, 00:20:05.898 "process_max_bandwidth_mb_sec": 0 00:20:05.898 } 00:20:05.898 }, 00:20:05.898 { 00:20:05.898 "method": "bdev_iscsi_set_options", 00:20:05.898 "params": { 00:20:05.898 "timeout_sec": 30 00:20:05.898 } 00:20:05.898 }, 00:20:05.898 { 00:20:05.898 "method": "bdev_nvme_set_options", 00:20:05.898 "params": { 00:20:05.898 "action_on_timeout": "none", 00:20:05.898 
"timeout_us": 0, 00:20:05.898 "timeout_admin_us": 0, 00:20:05.898 "keep_alive_timeout_ms": 10000, 00:20:05.898 "arbitration_burst": 0, 00:20:05.898 "low_priority_weight": 0, 00:20:05.898 "medium_priority_weight": 0, 00:20:05.898 "high_priority_weight": 0, 00:20:05.898 "nvme_adminq_poll_period_us": 10000, 00:20:05.898 "nvme_ioq_poll_period_us": 0, 00:20:05.898 "io_queue_requests": 0, 00:20:05.898 "delay_cmd_submit": true, 00:20:05.898 "transport_retry_count": 4, 00:20:05.898 "bdev_retry_count": 3, 00:20:05.898 "transport_ack_timeout": 0, 00:20:05.898 "ctrlr_loss_timeout_sec": 0, 00:20:05.898 "reconnect_delay_sec": 0, 00:20:05.898 "fast_io_fail_timeout_sec": 0, 00:20:05.898 "disable_auto_failback": false, 00:20:05.898 "generate_uuids": false, 00:20:05.898 "transport_tos": 0, 00:20:05.898 "nvme_error_stat": false, 00:20:05.898 "rdma_srq_size": 0, 00:20:05.898 "io_path_stat": false, 00:20:05.898 "allow_accel_sequence": false, 00:20:05.898 "rdma_max_cq_size": 0, 00:20:05.898 "rdma_cm_event_timeout_ms": 0, 00:20:05.898 "dhchap_digests": [ 00:20:05.898 "sha256", 00:20:05.898 "sha384", 00:20:05.898 "sha512" 00:20:05.898 ], 00:20:05.898 "dhchap_dhgroups": [ 00:20:05.898 "null", 00:20:05.898 "ffdhe2048", 00:20:05.898 "ffdhe3072", 00:20:05.898 "ffdhe4096", 00:20:05.898 "ffdhe6144", 00:20:05.898 "ffdhe8192" 00:20:05.898 ] 00:20:05.898 } 00:20:05.898 }, 00:20:05.898 { 00:20:05.898 "method": "bdev_nvme_set_hotplug", 00:20:05.898 "params": { 00:20:05.898 "period_us": 100000, 00:20:05.898 "enable": false 00:20:05.898 } 00:20:05.898 }, 00:20:05.898 { 00:20:05.898 "method": "bdev_malloc_create", 00:20:05.898 "params": { 00:20:05.898 "name": "malloc0", 00:20:05.898 "num_blocks": 8192, 00:20:05.898 "block_size": 4096, 00:20:05.898 "physical_block_size": 4096, 00:20:05.898 "uuid": "2327306d-e8c0-4151-96dd-bd9c8b9d11fc", 00:20:05.898 "optimal_io_boundary": 0, 00:20:05.898 "md_size": 0, 00:20:05.898 "dif_type": 0, 00:20:05.898 "dif_is_head_of_md": false, 00:20:05.898 "dif_pi_format": 0 00:20:05.898 } 00:20:05.898 }, 00:20:05.898 { 00:20:05.898 "method": "bdev_wait_for_examine" 00:20:05.898 } 00:20:05.898 ] 00:20:05.898 }, 00:20:05.898 { 00:20:05.898 "subsystem": "nbd", 00:20:05.898 "config": [] 00:20:05.898 }, 00:20:05.898 { 00:20:05.898 "subsystem": "scheduler", 00:20:05.898 "config": [ 00:20:05.898 { 00:20:05.898 "method": "framework_set_scheduler", 00:20:05.898 "params": { 00:20:05.898 "name": "static" 00:20:05.898 } 00:20:05.898 } 00:20:05.898 ] 00:20:05.898 }, 00:20:05.898 { 00:20:05.898 "subsystem": "nvmf", 00:20:05.898 "config": [ 00:20:05.898 { 00:20:05.898 "method": "nvmf_set_config", 00:20:05.898 "params": { 00:20:05.898 "discovery_filter": "match_any", 00:20:05.898 "admin_cmd_passthru": { 00:20:05.898 "identify_ctrlr": false 00:20:05.898 } 00:20:05.898 } 00:20:05.898 }, 00:20:05.898 { 00:20:05.898 "method": "nvmf_set_max_subsystems", 00:20:05.898 "params": { 00:20:05.898 "max_subsystems": 1024 00:20:05.898 } 00:20:05.898 }, 00:20:05.898 { 00:20:05.898 "method": "nvmf_set_crdt", 00:20:05.898 "params": { 00:20:05.898 "crdt1": 0, 00:20:05.898 "crdt2": 0, 00:20:05.898 "crdt3": 0 00:20:05.898 } 00:20:05.898 }, 00:20:05.898 { 00:20:05.898 "method": "nvmf_create_transport", 00:20:05.898 "params": { 00:20:05.898 "trtype": "TCP", 00:20:05.898 "max_queue_depth": 128, 00:20:05.898 "max_io_qpairs_per_ctrlr": 127, 00:20:05.898 "in_capsule_data_size": 4096, 00:20:05.898 "max_io_size": 131072, 00:20:05.898 "io_unit_size": 131072, 00:20:05.898 "max_aq_depth": 128, 00:20:05.898 "num_shared_buffers": 511, 00:20:05.898 
"buf_cache_size": 4294967295, 00:20:05.898 "dif_insert_or_strip": false, 00:20:05.898 "zcopy": false, 00:20:05.898 "c2h_success": false, 00:20:05.898 "sock_priority": 0, 00:20:05.898 "abort_timeout_sec": 1, 00:20:05.898 "ack_timeout": 0, 00:20:05.898 "data_wr_pool_size": 0 00:20:05.898 } 00:20:05.898 }, 00:20:05.898 { 00:20:05.898 "method": "nvmf_create_subsystem", 00:20:05.898 "params": { 00:20:05.898 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:05.898 "allow_any_host": false, 00:20:05.898 "serial_number": "00000000000000000000", 00:20:05.898 "model_number": "SPDK bdev Controller", 00:20:05.898 "max_namespaces": 32, 00:20:05.898 "min_cntlid": 1, 00:20:05.898 "max_cntlid": 65519, 00:20:05.898 "ana_reporting": false 00:20:05.898 } 00:20:05.898 }, 00:20:05.898 { 00:20:05.898 "method": "nvmf_subsystem_add_host", 00:20:05.898 "params": { 00:20:05.898 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:05.898 "host": "nqn.2016-06.io.spdk:host1", 00:20:05.898 "psk": "key0" 00:20:05.898 } 00:20:05.898 }, 00:20:05.898 { 00:20:05.898 "method": "nvmf_subsystem_add_ns", 00:20:05.898 "params": { 00:20:05.898 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:05.898 "namespace": { 00:20:05.898 "nsid": 1, 00:20:05.898 "bdev_name": "malloc0", 00:20:05.898 "nguid": "2327306DE8C0415196DDBD9C8B9D11FC", 00:20:05.898 "uuid": "2327306d-e8c0-4151-96dd-bd9c8b9d11fc", 00:20:05.898 "no_auto_visible": false 00:20:05.898 } 00:20:05.898 } 00:20:05.898 }, 00:20:05.898 { 00:20:05.898 "method": "nvmf_subsystem_add_listener", 00:20:05.898 "params": { 00:20:05.899 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:05.899 "listen_address": { 00:20:05.899 "trtype": "TCP", 00:20:05.899 "adrfam": "IPv4", 00:20:05.899 "traddr": "10.0.0.2", 00:20:05.899 "trsvcid": "4420" 00:20:05.899 }, 00:20:05.899 "secure_channel": false, 00:20:05.899 "sock_impl": "ssl" 00:20:05.899 } 00:20:05.899 } 00:20:05.899 ] 00:20:05.899 } 00:20:05.899 ] 00:20:05.899 }' 00:20:05.899 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:05.899 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=76732 00:20:05.899 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:05.899 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 76732 00:20:05.899 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 76732 ']' 00:20:05.899 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:05.899 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:05.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:05.899 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:05.899 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:05.899 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:05.899 [2024-07-25 09:02:12.722937] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:20:05.899 [2024-07-25 09:02:12.723107] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:05.899 [2024-07-25 09:02:12.901482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.158 [2024-07-25 09:02:13.207272] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:06.158 [2024-07-25 09:02:13.207334] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:06.158 [2024-07-25 09:02:13.207352] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:06.158 [2024-07-25 09:02:13.207372] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:06.158 [2024-07-25 09:02:13.207384] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:06.158 [2024-07-25 09:02:13.207526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.415 [2024-07-25 09:02:13.525548] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:06.673 [2024-07-25 09:02:13.703342] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:06.673 [2024-07-25 09:02:13.742257] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:06.673 [2024-07-25 09:02:13.742534] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:06.673 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:06.673 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:06.673 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:06.673 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:06.673 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.931 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:06.931 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=76770 00:20:06.931 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 76770 /var/tmp/bdevperf.sock 00:20:06.932 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 76770 ']' 00:20:06.932 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:06.932 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:06.932 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:06.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:06.932 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
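Both processes in this step receive their configuration through a file descriptor rather than being rebuilt RPC by RPC: the nvmf_tgt above reads the JSON echoed at target/tls.sh@271 via -c /dev/fd/62, and the bdevperf instance reads the JSON echoed below via -c /dev/fd/63, so each comes up with the keyring entry, the ssl listener and the PSK-protected host already in place. A minimal sketch of the same pattern without process substitution, assuming a target already running on its default RPC socket and using a hypothetical /tmp/tgt.json as the intermediate file:

  # Capture the live configuration of the running target...
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > /tmp/tgt.json
  # ...and start a fresh target preloaded with it
  # (the -c /dev/fd/62 form used in the log avoids the temporary file)
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /tmp/tgt.json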
00:20:06.932 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:20:06.932 "subsystems": [ 00:20:06.932 { 00:20:06.932 "subsystem": "keyring", 00:20:06.932 "config": [ 00:20:06.932 { 00:20:06.932 "method": "keyring_file_add_key", 00:20:06.932 "params": { 00:20:06.932 "name": "key0", 00:20:06.932 "path": "/tmp/tmp.wYjyZPEOU0" 00:20:06.932 } 00:20:06.932 } 00:20:06.932 ] 00:20:06.932 }, 00:20:06.932 { 00:20:06.932 "subsystem": "iobuf", 00:20:06.932 "config": [ 00:20:06.932 { 00:20:06.932 "method": "iobuf_set_options", 00:20:06.932 "params": { 00:20:06.932 "small_pool_count": 8192, 00:20:06.932 "large_pool_count": 1024, 00:20:06.932 "small_bufsize": 8192, 00:20:06.932 "large_bufsize": 135168 00:20:06.932 } 00:20:06.932 } 00:20:06.932 ] 00:20:06.932 }, 00:20:06.932 { 00:20:06.932 "subsystem": "sock", 00:20:06.932 "config": [ 00:20:06.932 { 00:20:06.932 "method": "sock_set_default_impl", 00:20:06.932 "params": { 00:20:06.932 "impl_name": "uring" 00:20:06.932 } 00:20:06.932 }, 00:20:06.932 { 00:20:06.932 "method": "sock_impl_set_options", 00:20:06.932 "params": { 00:20:06.932 "impl_name": "ssl", 00:20:06.932 "recv_buf_size": 4096, 00:20:06.932 "send_buf_size": 4096, 00:20:06.932 "enable_recv_pipe": true, 00:20:06.932 "enable_quickack": false, 00:20:06.932 "enable_placement_id": 0, 00:20:06.932 "enable_zerocopy_send_server": true, 00:20:06.932 "enable_zerocopy_send_client": false, 00:20:06.932 "zerocopy_threshold": 0, 00:20:06.932 "tls_version": 0, 00:20:06.932 "enable_ktls": false 00:20:06.932 } 00:20:06.932 }, 00:20:06.932 { 00:20:06.932 "method": "sock_impl_set_options", 00:20:06.932 "params": { 00:20:06.932 "impl_name": "posix", 00:20:06.932 "recv_buf_size": 2097152, 00:20:06.932 "send_buf_size": 2097152, 00:20:06.932 "enable_recv_pipe": true, 00:20:06.932 "enable_quickack": false, 00:20:06.932 "enable_placement_id": 0, 00:20:06.932 "enable_zerocopy_send_server": true, 00:20:06.932 "enable_zerocopy_send_client": false, 00:20:06.932 "zerocopy_threshold": 0, 00:20:06.932 "tls_version": 0, 00:20:06.932 "enable_ktls": false 00:20:06.932 } 00:20:06.932 }, 00:20:06.932 { 00:20:06.932 "method": "sock_impl_set_options", 00:20:06.932 "params": { 00:20:06.932 "impl_name": "uring", 00:20:06.932 "recv_buf_size": 2097152, 00:20:06.932 "send_buf_size": 2097152, 00:20:06.932 "enable_recv_pipe": true, 00:20:06.932 "enable_quickack": false, 00:20:06.932 "enable_placement_id": 0, 00:20:06.932 "enable_zerocopy_send_server": false, 00:20:06.932 "enable_zerocopy_send_client": false, 00:20:06.932 "zerocopy_threshold": 0, 00:20:06.932 "tls_version": 0, 00:20:06.932 "enable_ktls": false 00:20:06.932 } 00:20:06.932 } 00:20:06.932 ] 00:20:06.932 }, 00:20:06.932 { 00:20:06.932 "subsystem": "vmd", 00:20:06.932 "config": [] 00:20:06.932 }, 00:20:06.932 { 00:20:06.932 "subsystem": "accel", 00:20:06.932 "config": [ 00:20:06.932 { 00:20:06.932 "method": "accel_set_options", 00:20:06.932 "params": { 00:20:06.932 "small_cache_size": 128, 00:20:06.932 "large_cache_size": 16, 00:20:06.932 "task_count": 2048, 00:20:06.932 "sequence_count": 2048, 00:20:06.932 "buf_count": 2048 00:20:06.932 } 00:20:06.932 } 00:20:06.932 ] 00:20:06.932 }, 00:20:06.932 { 00:20:06.932 "subsystem": "bdev", 00:20:06.932 "config": [ 00:20:06.932 { 00:20:06.932 "method": "bdev_set_options", 00:20:06.932 "params": { 00:20:06.932 "bdev_io_pool_size": 65535, 00:20:06.932 "bdev_io_cache_size": 256, 00:20:06.932 "bdev_auto_examine": true, 00:20:06.932 "iobuf_small_cache_size": 128, 00:20:06.932 "iobuf_large_cache_size": 16 
00:20:06.932 } 00:20:06.932 }, 00:20:06.932 { 00:20:06.932 "method": "bdev_raid_set_options", 00:20:06.932 "params": { 00:20:06.932 "process_window_size_kb": 1024, 00:20:06.932 "process_max_bandwidth_mb_sec": 0 00:20:06.932 } 00:20:06.932 }, 00:20:06.932 { 00:20:06.932 "method": "bdev_iscsi_set_options", 00:20:06.932 "params": { 00:20:06.932 "timeout_sec": 30 00:20:06.932 } 00:20:06.932 }, 00:20:06.932 { 00:20:06.932 "method": "bdev_nvme_set_options", 00:20:06.932 "params": { 00:20:06.932 "action_on_timeout": "none", 00:20:06.932 "timeout_us": 0, 00:20:06.932 "timeout_admin_us": 0, 00:20:06.932 "keep_alive_timeout_ms": 10000, 00:20:06.932 "arbitration_burst": 0, 00:20:06.932 "low_priority_weight": 0, 00:20:06.932 "medium_priority_weight": 0, 00:20:06.932 "high_priority_weight": 0, 00:20:06.932 "nvme_adminq_poll_period_us": 10000, 00:20:06.932 "nvme_ioq_poll_period_us": 0, 00:20:06.932 "io_queue_requests": 512, 00:20:06.932 "delay_cmd_submit": true, 00:20:06.932 "transport_retry_count": 4, 00:20:06.932 "bdev_retry_count": 3, 00:20:06.932 "transport_ack_timeout": 0, 00:20:06.932 "ctrlr_loss_timeout_sec": 0, 00:20:06.932 "reconnect_delay_sec": 0, 00:20:06.932 "fast_io_fail_timeout_sec": 0, 00:20:06.932 "disable_auto_failback": false, 00:20:06.932 "generate_uuids": false, 00:20:06.932 "transport_tos": 0, 00:20:06.932 "nvme_error_stat": false, 00:20:06.932 "rdma_srq_size": 0, 00:20:06.932 "io_path_stat": false, 00:20:06.932 "allow_accel_sequence": false, 00:20:06.932 "rdma_max_cq_size": 0, 00:20:06.932 "rdma_cm_event_timeout_ms": 0, 00:20:06.932 "dhchap_digests": [ 00:20:06.932 "sha256", 00:20:06.932 "sha384", 00:20:06.932 "sha512" 00:20:06.932 ], 00:20:06.932 "dhchap_dhgroups": [ 00:20:06.932 "null", 00:20:06.932 "ffdhe2048", 00:20:06.932 "ffdhe3072", 00:20:06.932 "ffdhe4096", 00:20:06.932 "ffdhe6144", 00:20:06.932 "ffdhe8192" 00:20:06.932 ] 00:20:06.932 } 00:20:06.932 }, 00:20:06.932 { 00:20:06.932 "method": "bdev_nvme_attach_controller", 00:20:06.932 "params": { 00:20:06.932 "name": "nvme0", 00:20:06.932 "trtype": "TCP", 00:20:06.932 "adrfam": "IPv4", 00:20:06.932 "traddr": "10.0.0.2", 00:20:06.932 "trsvcid": "4420", 00:20:06.932 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:06.932 "prchk_reftag": false, 00:20:06.932 "prchk_guard": false, 00:20:06.932 "ctrlr_loss_timeout_sec": 0, 00:20:06.932 "reconnect_delay_sec": 0, 00:20:06.932 "fast_io_fail_timeout_sec": 0, 00:20:06.932 "psk": "key0", 00:20:06.932 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:06.932 "hdgst": false, 00:20:06.932 "ddgst": false 00:20:06.932 } 00:20:06.932 }, 00:20:06.932 { 00:20:06.932 "method": "bdev_nvme_set_hotplug", 00:20:06.932 "params": { 00:20:06.932 "period_us": 100000, 00:20:06.933 "enable": false 00:20:06.933 } 00:20:06.933 }, 00:20:06.933 { 00:20:06.933 "method": "bdev_enable_histogram", 00:20:06.933 "params": { 00:20:06.933 "name": "nvme0n1", 00:20:06.933 "enable": true 00:20:06.933 } 00:20:06.933 }, 00:20:06.933 { 00:20:06.933 "method": "bdev_wait_for_examine" 00:20:06.933 } 00:20:06.933 ] 00:20:06.933 }, 00:20:06.933 { 00:20:06.933 "subsystem": "nbd", 00:20:06.933 "config": [] 00:20:06.933 } 00:20:06.933 ] 00:20:06.933 }' 00:20:06.933 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:06.933 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.933 [2024-07-25 09:02:13.908695] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:20:06.933 [2024-07-25 09:02:13.908867] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76770 ] 00:20:07.192 [2024-07-25 09:02:14.076673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.450 [2024-07-25 09:02:14.341563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.706 [2024-07-25 09:02:14.623312] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:07.706 [2024-07-25 09:02:14.743446] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:07.963 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:07.963 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:07.963 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:07.963 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:20:08.220 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.220 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:08.220 Running I/O for 1 seconds... 00:20:09.592 00:20:09.592 Latency(us) 00:20:09.592 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.592 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:09.592 Verification LBA range: start 0x0 length 0x2000 00:20:09.592 nvme0n1 : 1.04 2814.93 11.00 0.00 0.00 44793.97 9413.35 27644.28 00:20:09.592 =================================================================================================================== 00:20:09.592 Total : 2814.93 11.00 0.00 0.00 44793.97 9413.35 27644.28 00:20:09.592 0 00:20:09.592 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:20:09.592 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:20:09.592 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:09.592 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:20:09.592 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:20:09.592 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:20:09.592 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:09.592 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:20:09.592 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:20:09.592 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:20:09.592 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:09.592 nvmf_trace.0 00:20:09.592 09:02:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:20:09.592 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 76770 00:20:09.592 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 76770 ']' 00:20:09.592 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 76770 00:20:09.592 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:09.592 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:09.592 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76770 00:20:09.592 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:09.592 killing process with pid 76770 00:20:09.592 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:09.592 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76770' 00:20:09.592 Received shutdown signal, test time was about 1.000000 seconds 00:20:09.592 00:20:09.592 Latency(us) 00:20:09.592 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.592 =================================================================================================================== 00:20:09.592 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:09.592 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 76770 00:20:09.592 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 76770 00:20:10.528 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:10.528 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:10.528 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:20:10.786 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:10.786 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:20:10.786 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:10.786 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:10.786 rmmod nvme_tcp 00:20:10.786 rmmod nvme_fabrics 00:20:10.786 rmmod nvme_keyring 00:20:10.786 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:10.786 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:20:10.786 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:20:10.786 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 76732 ']' 00:20:10.786 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 76732 00:20:10.786 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 76732 ']' 00:20:10.786 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 76732 00:20:10.786 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:10.786 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:10.786 09:02:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76732 00:20:10.786 killing process with pid 76732 00:20:10.786 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:10.786 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:10.786 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76732' 00:20:10.786 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 76732 00:20:10.786 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 76732 00:20:12.161 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:12.161 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:12.161 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:12.161 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:12.161 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:12.161 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.161 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:12.161 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.162 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:12.162 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.xAdzEVdPhK /tmp/tmp.fK4eOd9TUo /tmp/tmp.wYjyZPEOU0 00:20:12.162 00:20:12.162 real 1m48.083s 00:20:12.162 user 2m53.234s 00:20:12.162 sys 0m27.394s 00:20:12.162 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:12.162 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.162 ************************************ 00:20:12.162 END TEST nvmf_tls 00:20:12.162 ************************************ 00:20:12.162 09:02:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:12.162 09:02:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:12.162 09:02:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:12.162 09:02:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:12.162 ************************************ 00:20:12.162 START TEST nvmf_fips 00:20:12.162 ************************************ 00:20:12.162 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:12.422 * Looking for test storage... 
00:20:12.422 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 
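The check that follows takes the version string extracted above (3.0.9 on this runner) and compares it field by field against the 3.0.0 minimum through the cmp_versions helper in scripts/common.sh. A shorter equivalent in plain shell, using sort -V instead of the script's own helper (variable names here are illustrative only), would look roughly like:

  # Minimum OpenSSL version required by the FIPS test
  req=3.0.0
  ver=$(openssl version | awk '{print $2}')   # e.g. 3.0.9
  # sort -V orders version strings; the minimum must sort first (or be equal) for the check to pass
  if [ "$(printf '%s\n' "$req" "$ver" | sort -V | head -n1)" = "$req" ]; then
      echo "openssl $ver satisfies >= $req"
  fi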
00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:12.422 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:20:12.423 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:20:12.683 Error setting digest 00:20:12.683 00126F08DA7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:20:12.683 00126F08DA7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:12.683 Cannot find device "nvmf_tgt_br" 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # true 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:12.683 Cannot find device "nvmf_tgt_br2" 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # true 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:12.683 Cannot find device "nvmf_tgt_br" 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # true 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:12.683 Cannot find device "nvmf_tgt_br2" 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # true 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:12.683 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:12.683 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@178 -- # ip 
addr add 10.0.0.1/24 dev nvmf_init_if 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:12.683 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:12.943 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:12.943 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:12.943 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:12.943 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:12.943 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:12.943 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:20:12.943 00:20:12.943 --- 10.0.0.2 ping statistics --- 00:20:12.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.943 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:20:12.943 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:12.943 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:12.943 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:20:12.943 00:20:12.943 --- 10.0.0.3 ping statistics --- 00:20:12.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.943 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:20:12.943 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:12.943 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:12.943 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:20:12.943 00:20:12.943 --- 10.0.0.1 ping statistics --- 00:20:12.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.943 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:20:12.943 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:12.943 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:20:12.943 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:12.943 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:12.943 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:12.943 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:12.943 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:12.943 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:12.943 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:12.943 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:20:12.943 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:12.943 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:12.943 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:12.943 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=77062 00:20:12.943 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:12.943 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 77062 00:20:12.943 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 77062 ']' 00:20:12.943 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.943 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:12.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:12.943 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:12.943 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:12.943 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:12.943 [2024-07-25 09:02:20.034355] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:20:12.943 [2024-07-25 09:02:20.034528] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:13.201 [2024-07-25 09:02:20.216163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.461 [2024-07-25 09:02:20.498613] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:13.461 [2024-07-25 09:02:20.498687] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:13.461 [2024-07-25 09:02:20.498710] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:13.461 [2024-07-25 09:02:20.498741] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:13.461 [2024-07-25 09:02:20.498754] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:13.461 [2024-07-25 09:02:20.498803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:13.720 [2024-07-25 09:02:20.705095] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:13.978 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:13.978 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:20:13.978 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:13.978 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:13.978 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:13.978 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:13.978 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:20:13.978 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:13.978 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:20:13.978 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:13.978 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:20:13.979 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:20:13.979 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:20:13.979 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:14.247 [2024-07-25 09:02:21.166290] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:14.247 [2024-07-25 09:02:21.182992] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:14.247 [2024-07-25 09:02:21.183505] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:14.247 [2024-07-25 09:02:21.252732] 
tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:14.247 malloc0 00:20:14.247 09:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:14.247 09:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=77102 00:20:14.247 09:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:14.247 09:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 77102 /var/tmp/bdevperf.sock 00:20:14.247 09:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 77102 ']' 00:20:14.247 09:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:14.247 09:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:14.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:14.247 09:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:14.247 09:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:14.247 09:02:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:14.514 [2024-07-25 09:02:21.432635] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:14.514 [2024-07-25 09:02:21.432849] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77102 ] 00:20:14.514 [2024-07-25 09:02:21.606620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.778 [2024-07-25 09:02:21.880737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:15.036 [2024-07-25 09:02:22.082804] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:15.305 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:15.305 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:20:15.305 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:20:15.566 [2024-07-25 09:02:22.554127] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:15.566 [2024-07-25 09:02:22.554451] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:15.566 TLSTESTn1 00:20:15.566 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:15.824 Running I/O for 10 seconds... 
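The run above wires TLS end to end: the pre-shared key is written to key.txt with mode 0600, the target subsystem is configured through rpc.py, and bdevperf attaches over NVMe/TCP with --psk before perform_tests drives I/O for 10 seconds. A condensed sketch of the initiator-side steps, using the exact paths, NQNs and key value from this particular run (they are test-local values, not defaults); the latency summary that follows is the output of the final perform_tests call:

#!/usr/bin/env bash
# Condensed sketch of the initiator-side TLS flow traced above.
# Paths, NQNs and the PSK are the values from this run; adjust for other setups.
SPDK=/home/vagrant/spdk_repo/spdk
KEY=$SPDK/test/nvmf/fips/key.txt

# Store the NVMe/TCP pre-shared key with restrictive permissions.
echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$KEY"
chmod 0600 "$KEY"

# Start bdevperf with its own RPC socket and wait for the socket to appear
# (the real test uses waitforlisten for this).
$SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
while [ ! -S /var/tmp/bdevperf.sock ]; do sleep 0.2; done

# Attach a TLS-protected controller to the listener at 10.0.0.2:4420.
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"

# Kick off the configured 10-second verify workload.
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

Note the warnings in the trace: both the target-side PSK path and spdk_nvme_ctrlr_opts.psk are flagged as deprecated for removal in v24.09, so newer SPDK releases configure TLS keys differently.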
00:20:25.796 00:20:25.796 Latency(us) 00:20:25.796 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.796 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:25.796 Verification LBA range: start 0x0 length 0x2000 00:20:25.796 TLSTESTn1 : 10.04 2691.80 10.51 0.00 0.00 47433.60 11558.17 50522.30 00:20:25.796 =================================================================================================================== 00:20:25.796 Total : 2691.80 10.51 0.00 0.00 47433.60 11558.17 50522.30 00:20:25.796 0 00:20:25.796 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:25.796 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:25.796 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:20:25.796 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:20:25.796 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:20:25.796 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:25.796 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:20:25.796 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:20:25.796 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:20:25.796 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:25.796 nvmf_trace.0 00:20:25.796 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:20:25.796 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 77102 00:20:25.796 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 77102 ']' 00:20:25.796 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 77102 00:20:25.796 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:20:25.796 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:25.796 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77102 00:20:26.055 killing process with pid 77102 00:20:26.055 Received shutdown signal, test time was about 10.000000 seconds 00:20:26.055 00:20:26.055 Latency(us) 00:20:26.055 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.055 =================================================================================================================== 00:20:26.055 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:26.055 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:26.055 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:26.055 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77102' 00:20:26.055 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 77102 00:20:26.055 [2024-07-25 09:02:32.933559] 
app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:26.055 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 77102 00:20:27.431 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:27.431 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:27.431 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:20:27.431 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:27.431 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:20:27.431 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:27.431 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:27.431 rmmod nvme_tcp 00:20:27.431 rmmod nvme_fabrics 00:20:27.431 rmmod nvme_keyring 00:20:27.431 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:27.431 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:20:27.431 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:20:27.431 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 77062 ']' 00:20:27.431 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 77062 00:20:27.431 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 77062 ']' 00:20:27.431 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 77062 00:20:27.431 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:20:27.431 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:27.431 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77062 00:20:27.431 killing process with pid 77062 00:20:27.431 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:27.431 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:27.431 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77062' 00:20:27.431 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 77062 00:20:27.431 [2024-07-25 09:02:34.311659] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:27.431 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 77062 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:20:28.807 00:20:28.807 real 0m16.411s 00:20:28.807 user 0m23.529s 00:20:28.807 sys 0m5.339s 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:28.807 ************************************ 00:20:28.807 END TEST nvmf_fips 00:20:28.807 ************************************ 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 1 -eq 1 ']' 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@46 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:28.807 ************************************ 00:20:28.807 START TEST nvmf_fuzz 00:20:28.807 ************************************ 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:20:28.807 * Looking for test storage... 
00:20:28.807 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:28.807 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:28.808 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:20:28.808 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:28.808 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:28.808 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:28.808 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 
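nvmftestinit now rebuilds the virtual test network for the fuzz run; the entries that follow show nvmf_veth_init tearing down any stale interfaces and recreating the veth/namespace/bridge topology (the same layout the FIPS test used above). A compact sketch of that topology, with interface names and addresses taken from this run:

#!/usr/bin/env bash
# Compact sketch of the veth/namespace topology the following trace builds.
# Names and addresses match this run; they are test-local conventions.
set -e
NS=nvmf_tgt_ns_spdk

ip netns add "$NS"

# Initiator-side veth pair stays in the root namespace.
ip link add nvmf_init_if type veth peer name nvmf_init_br
# Two target-side pairs; one end of each moves into the target namespace.
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# Addressing: initiator 10.0.0.1, target listeners 10.0.0.2 and 10.0.0.3.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up and bridge the root-namespace ends together.
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done

# Allow NVMe/TCP (port 4420) in from the initiator side and let bridged traffic through.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT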
00:20:28.808 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:28.808 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.808 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:28.808 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.808 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:28.808 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:28.808 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:28.808 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:28.808 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:28.808 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:28.808 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:28.808 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:28.808 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:28.808 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:28.808 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:28.808 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:28.808 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:28.808 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:28.808 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:28.808 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:28.808 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:28.808 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:28.808 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:28.808 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:28.808 Cannot find device "nvmf_tgt_br" 00:20:28.808 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@155 -- # true 00:20:28.808 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:28.808 Cannot find device "nvmf_tgt_br2" 00:20:28.808 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@156 -- # true 00:20:28.808 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:28.808 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:28.808 Cannot find device "nvmf_tgt_br" 00:20:28.808 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@158 -- # true 
00:20:28.808 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:29.067 Cannot find device "nvmf_tgt_br2" 00:20:29.067 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@159 -- # true 00:20:29.067 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:29.067 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:29.067 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:29.067 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:29.067 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:20:29.067 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:29.067 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:29.067 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:20:29.067 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:29.067 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:29.067 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:29.067 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:29.067 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:29.067 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:29.067 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:29.067 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:29.067 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:29.067 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:29.067 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:29.067 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:29.067 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:29.067 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:29.067 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:29.067 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:29.067 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:29.067 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:29.067 09:02:36 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:29.067 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:29.067 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:29.067 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:29.067 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:29.067 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:29.067 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:29.067 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:20:29.067 00:20:29.067 --- 10.0.0.2 ping statistics --- 00:20:29.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.067 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:20:29.067 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:29.067 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:29.067 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:20:29.067 00:20:29.067 --- 10.0.0.3 ping statistics --- 00:20:29.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.067 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:20:29.067 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:29.067 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:29.067 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:20:29.067 00:20:29.067 --- 10.0.0.1 ping statistics --- 00:20:29.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.067 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:20:29.067 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:29.067 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@433 -- # return 0 00:20:29.067 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:29.067 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:29.067 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:29.067 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:29.067 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:29.067 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:29.067 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:29.325 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=77460 00:20:29.325 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:29.325 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:20:29.325 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # 
waitforlisten 77460 00:20:29.325 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' -z 77460 ']' 00:20:29.325 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.325 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:29.326 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:29.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:29.326 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:29.326 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:30.260 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:30.260 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:20:30.260 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:30.260 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.260 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:30.260 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.260 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:20:30.260 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.260 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:30.518 Malloc0 00:20:30.518 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.518 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:30.518 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.518 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:30.518 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.518 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:30.518 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.518 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:30.518 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.518 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:30.518 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.518 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:30.518 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
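The rpc_cmd calls above are thin wrappers around scripts/rpc.py talking to the target's default /var/tmp/spdk.sock. The same fuzz-target configuration, written out as direct rpc.py invocations (a sketch; all arguments are copied from this run):

#!/usr/bin/env bash
# The rpc_cmd sequence above, expressed as direct rpc.py calls against the
# target's default RPC socket.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192                                      # TCP transport, flags as used above
$RPC bdev_malloc_create -b Malloc0 64 512                                         # 64 MiB malloc bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001    # allow any host, set serial number
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                     # expose the bdev as a namespace
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # listener inside the test netns

The nvme_fuzz runs that follow target this subsystem through the trid string built from the same subsystem NQN, address and port.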
00:20:30.518 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:20:30.518 09:02:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:20:31.454 Shutting down the fuzz application 00:20:31.454 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:20:32.390 Shutting down the fuzz application 00:20:32.390 09:02:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:32.390 09:02:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.390 09:02:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:32.390 09:02:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.390 09:02:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:20:32.390 09:02:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:20:32.390 09:02:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:32.390 09:02:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:20:32.390 09:02:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:32.390 09:02:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:20:32.390 09:02:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:32.390 09:02:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:32.390 rmmod nvme_tcp 00:20:32.390 rmmod nvme_fabrics 00:20:32.390 rmmod nvme_keyring 00:20:32.390 09:02:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:32.390 09:02:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:20:32.390 09:02:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:20:32.390 09:02:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 77460 ']' 00:20:32.390 09:02:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 77460 00:20:32.390 09:02:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 77460 ']' 00:20:32.390 09:02:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 77460 00:20:32.390 09:02:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:20:32.390 09:02:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:32.390 09:02:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77460 00:20:32.390 09:02:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:32.390 09:02:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:32.390 killing process with pid 77460 00:20:32.390 09:02:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77460' 00:20:32.390 09:02:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 77460 00:20:32.390 09:02:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 77460 00:20:33.768 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:33.768 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:33.768 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:33.768 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:33.768 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:33.768 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:33.768 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:33.768 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:33.768 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:33.768 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:20:33.768 00:20:33.768 real 0m5.053s 00:20:33.768 user 0m6.108s 00:20:33.768 sys 0m0.920s 00:20:33.768 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:33.768 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:33.768 ************************************ 00:20:33.768 END TEST nvmf_fuzz 00:20:33.768 ************************************ 00:20:33.769 09:02:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:20:33.769 09:02:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:33.769 09:02:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:33.769 09:02:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:33.769 ************************************ 00:20:33.769 START TEST nvmf_multiconnection 00:20:33.769 ************************************ 00:20:33.769 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:20:34.029 * Looking for test storage... 
00:20:34.029 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:34.029 09:02:40 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:34.029 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:34.030 Cannot find device "nvmf_tgt_br" 00:20:34.030 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@155 -- # true 00:20:34.030 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:34.030 Cannot find device "nvmf_tgt_br2" 00:20:34.030 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@156 -- # true 00:20:34.030 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:34.030 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:34.030 Cannot find device "nvmf_tgt_br" 00:20:34.030 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@158 -- # true 00:20:34.030 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:34.030 Cannot find device "nvmf_tgt_br2" 00:20:34.030 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@159 -- # true 00:20:34.030 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:34.030 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:34.030 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:34.030 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:34.030 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:20:34.030 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:34.030 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:34.030 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:20:34.030 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:34.030 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:34.030 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:34.030 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:34.030 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
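The "Cannot find device" and "Cannot open network namespace" messages above are only the teardown of a previous run failing harmlessly; immediately afterwards nvmf_veth_init rebuilds the virtual topology from scratch: a fresh nvmf_tgt_ns_spdk namespace, three veth pairs, and the first target interface moved into the namespace. The lines that follow do the same for nvmf_tgt_if2, assign the 10.0.0.x/24 addresses, and bridge the peer ends together over nvmf_br. Condensed into plain iproute2 commands, a sketch based only on the commands visible in this log, with no extra options assumed:

    ip netns add nvmf_tgt_ns_spdk                                  # network namespace that will host nvmf_tgt
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair (10.0.0.1)
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target-side pair, first port (10.0.0.2)
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # target-side pair, second port (10.0.0.3)
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # target end lives inside the namespace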
00:20:34.030 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:34.030 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:34.030 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:34.030 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:34.030 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:34.030 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:34.030 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:34.030 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:34.289 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:34.289 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:34.289 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:34.289 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:34.289 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:34.289 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:34.289 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:34.289 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:34.289 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:34.289 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:34.289 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:34.289 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:34.289 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:20:34.289 00:20:34.289 --- 10.0.0.2 ping statistics --- 00:20:34.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.289 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:20:34.289 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:34.289 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:34.289 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:20:34.289 00:20:34.289 --- 10.0.0.3 ping statistics --- 00:20:34.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.289 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:20:34.289 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:34.290 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:34.290 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:20:34.290 00:20:34.290 --- 10.0.0.1 ping statistics --- 00:20:34.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.290 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:20:34.290 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:34.290 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@433 -- # return 0 00:20:34.290 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:34.290 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:34.290 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:34.290 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:34.290 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:34.290 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:34.290 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:34.290 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:20:34.290 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:34.290 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:34.290 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:34.290 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=77701 00:20:34.290 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:34.290 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 77701 00:20:34.290 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 77701 ']' 00:20:34.290 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.290 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:34.290 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:34.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
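All three connectivity checks pass (initiator to 10.0.0.2 and 10.0.0.3, and 10.0.0.1 back from inside the namespace), so the target application is launched inside the namespace with ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF, and waitforlisten then blocks until the RPC socket /var/tmp/spdk.sock is serving, which is what the "Waiting for process to start up..." line just above reflects. A minimal stand-in for that wait, written as a hypothetical bash helper (wait_for_rpc_sock is not part of the harness; it assumes the stock scripts/rpc.py and the default socket path shown in this log):

    # poll until the SPDK RPC Unix socket answers; give up after roughly 50 s
    wait_for_rpc_sock() {
        local sock=${1:-/var/tmp/spdk.sock} i
        for i in $(seq 1 100); do
            ./scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        echo "nvmf_tgt never started listening on $sock" >&2
        return 1
    }

    # usage sketch: wait, then issue the same transport RPC the test sends next
    wait_for_rpc_sock && ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192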
00:20:34.290 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:34.290 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:34.290 [2024-07-25 09:02:41.353836] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:34.290 [2024-07-25 09:02:41.354016] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:34.549 [2024-07-25 09:02:41.523300] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:34.809 [2024-07-25 09:02:41.791558] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:34.809 [2024-07-25 09:02:41.791627] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:34.809 [2024-07-25 09:02:41.791645] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:34.809 [2024-07-25 09:02:41.791661] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:34.809 [2024-07-25 09:02:41.791677] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:34.809 [2024-07-25 09:02:41.791872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:34.809 [2024-07-25 09:02:41.791935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:34.809 [2024-07-25 09:02:41.792412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:34.809 [2024-07-25 09:02:41.792440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:35.083 [2024-07-25 09:02:41.996462] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:35.358 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:35.358 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:20:35.358 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:35.358 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:35.358 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:35.358 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:35.358 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:35.358 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.358 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:35.358 [2024-07-25 09:02:42.292455] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:35.358 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.358 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:20:35.358 09:02:42 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:35.358 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:35.358 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.358 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:35.358 Malloc1 00:20:35.358 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.358 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:20:35.358 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.358 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:35.358 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.358 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:35.358 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.358 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:35.358 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.358 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:35.358 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.359 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:35.359 [2024-07-25 09:02:42.413594] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:35.359 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.359 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:35.359 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:20:35.359 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.359 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:35.618 Malloc2 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:35.618 Malloc3 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:35.618 Malloc4 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.618 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:35.877 Malloc5 00:20:35.877 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.877 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:20:35.877 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.877 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:35.877 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.877 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:20:35.877 
09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.877 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:35.877 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.877 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:20:35.877 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.877 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:35.877 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.877 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:35.877 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:20:35.877 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.877 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:35.877 Malloc6 00:20:35.877 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.877 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:20:35.877 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.877 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:35.877 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.877 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:20:35.877 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.877 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:35.877 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.877 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:20:35.877 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.877 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:35.877 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.877 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:35.877 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:20:35.877 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.877 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:36.136 Malloc7 00:20:36.136 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.136 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:20:36.136 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.136 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:36.136 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.136 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:20:36.136 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.136 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:36.136 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.136 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:20:36.136 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.136 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:36.136 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.136 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:36.136 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:20:36.136 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.136 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:36.136 Malloc8 00:20:36.136 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.136 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:20:36.136 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.136 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:36.136 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.136 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:20:36.136 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.136 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:36.136 
09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.136 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:20:36.136 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.136 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:36.136 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.136 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:36.136 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:20:36.136 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.136 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:36.136 Malloc9 00:20:36.136 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.136 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:20:36.136 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.136 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:36.136 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.136 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:20:36.136 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.136 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:36.136 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.136 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:20:36.136 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.136 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:36.136 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.137 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:36.137 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:20:36.137 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.137 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:36.395 Malloc10 00:20:36.395 09:02:43 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.395 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:20:36.395 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.395 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:36.395 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.395 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:20:36.395 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.395 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:36.395 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.395 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:20:36.395 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.395 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:36.395 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.395 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:36.395 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:20:36.395 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.395 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:36.395 Malloc11 00:20:36.395 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.395 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:20:36.395 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.395 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:36.395 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.395 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:20:36.395 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.395 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:36.395 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.395 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:20:36.395 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.395 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:36.395 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.395 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:20:36.395 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:36.395 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid=a4705431-95c9-4bc1-9185-4a8233d2d7f5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:36.654 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:20:36.654 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:20:36.654 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:36.654 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:20:36.654 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:20:38.557 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:38.557 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:38.557 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:20:38.557 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:20:38.557 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:38.557 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:20:38.557 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:38.557 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid=a4705431-95c9-4bc1-9185-4a8233d2d7f5 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:20:38.816 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:20:38.816 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:20:38.816 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:38.816 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:20:38.816 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:20:40.717 09:02:47 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:40.717 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:40.717 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:20:40.717 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:20:40.717 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:40.717 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:20:40.717 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:40.717 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid=a4705431-95c9-4bc1-9185-4a8233d2d7f5 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:20:40.975 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:20:40.975 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:20:40.975 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:40.975 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:20:40.975 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:20:42.873 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:42.873 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:42.873 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:20:42.873 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:20:42.873 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:42.873 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:20:42.873 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:42.873 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid=a4705431-95c9-4bc1-9185-4a8233d2d7f5 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:20:43.132 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:20:43.132 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:20:43.132 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:43.132 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n 
'' ]] 00:20:43.132 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:20:45.033 09:02:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:45.033 09:02:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:45.033 09:02:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:20:45.033 09:02:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:20:45.033 09:02:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:45.033 09:02:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:20:45.033 09:02:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:45.033 09:02:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid=a4705431-95c9-4bc1-9185-4a8233d2d7f5 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:20:45.292 09:02:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:20:45.292 09:02:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:20:45.292 09:02:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:45.292 09:02:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:20:45.292 09:02:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:20:47.204 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:47.204 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:47.204 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:20:47.204 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:20:47.204 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:47.204 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:20:47.204 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:47.204 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid=a4705431-95c9-4bc1-9185-4a8233d2d7f5 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:20:47.462 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:20:47.462 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:20:47.462 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local 
nvme_device_counter=1 nvme_devices=0 00:20:47.462 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:20:47.462 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:20:49.424 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:49.424 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:49.424 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:20:49.424 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:20:49.424 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:49.424 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:20:49.424 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:49.424 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid=a4705431-95c9-4bc1-9185-4a8233d2d7f5 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:20:49.424 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:20:49.424 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:20:49.424 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:49.424 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:20:49.424 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:20:51.954 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:51.954 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:51.954 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:20:51.954 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:20:51.954 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:51.954 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:20:51.954 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:51.954 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid=a4705431-95c9-4bc1-9185-4a8233d2d7f5 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:20:51.954 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:20:51.954 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1198 -- # local i=0 00:20:51.954 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:51.954 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:20:51.954 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:20:53.854 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:53.854 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:53.855 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:20:53.855 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:20:53.855 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:53.855 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:20:53.855 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:53.855 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid=a4705431-95c9-4bc1-9185-4a8233d2d7f5 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:20:53.855 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:20:53.855 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:20:53.855 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:53.855 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:20:53.855 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:20:55.756 09:03:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:55.756 09:03:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:55.756 09:03:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:20:56.014 09:03:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:20:56.015 09:03:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:56.015 09:03:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:20:56.015 09:03:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:56.015 09:03:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid=a4705431-95c9-4bc1-9185-4a8233d2d7f5 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:20:56.015 09:03:03 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:20:56.015 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:20:56.015 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:56.015 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:20:56.015 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:20:58.551 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:58.551 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:58.551 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:20:58.551 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:20:58.551 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:58.551 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:20:58.551 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:58.551 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid=a4705431-95c9-4bc1-9185-4a8233d2d7f5 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:20:58.551 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:20:58.551 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:20:58.551 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:58.551 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:20:58.551 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:21:00.453 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:00.453 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:00.453 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:21:00.453 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:21:00.453 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:00.453 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:21:00.453 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:21:00.453 [global] 00:21:00.453 thread=1 00:21:00.453 invalidate=1 00:21:00.453 rw=read 00:21:00.453 time_based=1 
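The long run of connect/wait iterations above follows one pattern per subsystem: nvme connect to the target at 10.0.0.2:4420 with the host NQN/ID shown in the trace, then waitforserial polls lsblk every two seconds (up to 16 attempts) until a namespace whose serial is SPDKn appears. A minimal stand-alone sketch of that loop, with $HOSTNQN and $HOSTID as placeholders for the UUID-based value printed in the trace:

    # Attach pattern repeated above for cnode1..cnode11: connect over TCP,
    # then poll lsblk until a namespace with serial SPDK$i shows up
    # (waitforserial: sleep 2 between checks, giving up after 16 tries).
    for i in $(seq 1 11); do
        nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
            -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
        tries=0
        while (( tries++ <= 15 )); do
            sleep 2
            (( $(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i") >= 1 )) && break
        done
    done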
00:21:00.453 runtime=10 00:21:00.453 ioengine=libaio 00:21:00.453 direct=1 00:21:00.453 bs=262144 00:21:00.453 iodepth=64 00:21:00.453 norandommap=1 00:21:00.453 numjobs=1 00:21:00.453 00:21:00.453 [job0] 00:21:00.453 filename=/dev/nvme0n1 00:21:00.453 [job1] 00:21:00.453 filename=/dev/nvme10n1 00:21:00.453 [job2] 00:21:00.453 filename=/dev/nvme1n1 00:21:00.453 [job3] 00:21:00.453 filename=/dev/nvme2n1 00:21:00.453 [job4] 00:21:00.453 filename=/dev/nvme3n1 00:21:00.453 [job5] 00:21:00.453 filename=/dev/nvme4n1 00:21:00.453 [job6] 00:21:00.453 filename=/dev/nvme5n1 00:21:00.453 [job7] 00:21:00.453 filename=/dev/nvme6n1 00:21:00.453 [job8] 00:21:00.453 filename=/dev/nvme7n1 00:21:00.453 [job9] 00:21:00.453 filename=/dev/nvme8n1 00:21:00.453 [job10] 00:21:00.453 filename=/dev/nvme9n1 00:21:00.453 Could not set queue depth (nvme0n1) 00:21:00.453 Could not set queue depth (nvme10n1) 00:21:00.453 Could not set queue depth (nvme1n1) 00:21:00.453 Could not set queue depth (nvme2n1) 00:21:00.453 Could not set queue depth (nvme3n1) 00:21:00.453 Could not set queue depth (nvme4n1) 00:21:00.453 Could not set queue depth (nvme5n1) 00:21:00.453 Could not set queue depth (nvme6n1) 00:21:00.453 Could not set queue depth (nvme7n1) 00:21:00.453 Could not set queue depth (nvme8n1) 00:21:00.453 Could not set queue depth (nvme9n1) 00:21:00.453 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:00.453 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:00.453 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:00.453 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:00.453 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:00.453 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:00.453 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:00.453 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:00.453 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:00.453 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:00.453 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:00.453 fio-3.35 00:21:00.453 Starting 11 threads 00:21:12.660 00:21:12.660 job0: (groupid=0, jobs=1): err= 0: pid=78156: Thu Jul 25 09:03:17 2024 00:21:12.660 read: IOPS=411, BW=103MiB/s (108MB/s)(1041MiB/10123msec) 00:21:12.660 slat (usec): min=17, max=90636, avg=2410.87, stdev=5708.03 00:21:12.660 clat (msec): min=58, max=263, avg=153.01, stdev=16.45 00:21:12.660 lat (msec): min=62, max=263, avg=155.42, stdev=16.66 00:21:12.660 clat percentiles (msec): 00:21:12.660 | 1.00th=[ 77], 5.00th=[ 134], 10.00th=[ 142], 20.00th=[ 146], 00:21:12.660 | 30.00th=[ 148], 40.00th=[ 150], 50.00th=[ 153], 60.00th=[ 155], 00:21:12.660 | 70.00th=[ 157], 80.00th=[ 161], 90.00th=[ 169], 95.00th=[ 176], 00:21:12.660 | 99.00th=[ 197], 99.50th=[ 222], 99.90th=[ 259], 99.95th=[ 259], 00:21:12.660 | 99.99th=[ 264] 00:21:12.660 bw ( KiB/s): min=98816, max=114176, per=6.75%, 
avg=104960.00, stdev=3767.27, samples=20 00:21:12.660 iops : min= 386, max= 446, avg=409.90, stdev=14.81, samples=20 00:21:12.660 lat (msec) : 100=1.54%, 250=98.32%, 500=0.14% 00:21:12.660 cpu : usr=0.22%, sys=1.81%, ctx=1027, majf=0, minf=4097 00:21:12.660 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:21:12.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:12.660 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:12.660 issued rwts: total=4164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:12.660 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:12.660 job1: (groupid=0, jobs=1): err= 0: pid=78157: Thu Jul 25 09:03:17 2024 00:21:12.660 read: IOPS=716, BW=179MiB/s (188MB/s)(1795MiB/10019msec) 00:21:12.660 slat (usec): min=17, max=156604, avg=1362.80, stdev=4000.58 00:21:12.660 clat (msec): min=9, max=361, avg=87.84, stdev=42.92 00:21:12.660 lat (msec): min=9, max=361, avg=89.20, stdev=43.59 00:21:12.660 clat percentiles (msec): 00:21:12.660 | 1.00th=[ 32], 5.00th=[ 67], 10.00th=[ 70], 20.00th=[ 72], 00:21:12.660 | 30.00th=[ 74], 40.00th=[ 75], 50.00th=[ 77], 60.00th=[ 79], 00:21:12.660 | 70.00th=[ 80], 80.00th=[ 82], 90.00th=[ 89], 95.00th=[ 220], 00:21:12.660 | 99.00th=[ 247], 99.50th=[ 275], 99.90th=[ 284], 99.95th=[ 284], 00:21:12.660 | 99.99th=[ 363] 00:21:12.661 bw ( KiB/s): min=66180, max=218112, per=11.72%, avg=182191.75, stdev=57926.32, samples=20 00:21:12.661 iops : min= 258, max= 852, avg=711.60, stdev=226.41, samples=20 00:21:12.661 lat (msec) : 10=0.01%, 20=0.56%, 50=1.13%, 100=89.18%, 250=8.22% 00:21:12.661 lat (msec) : 500=0.91% 00:21:12.661 cpu : usr=0.31%, sys=3.33%, ctx=1531, majf=0, minf=4097 00:21:12.661 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:21:12.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:12.661 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:12.661 issued rwts: total=7181,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:12.661 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:12.661 job2: (groupid=0, jobs=1): err= 0: pid=78158: Thu Jul 25 09:03:17 2024 00:21:12.661 read: IOPS=409, BW=102MiB/s (107MB/s)(1037MiB/10124msec) 00:21:12.661 slat (usec): min=18, max=65983, avg=2406.01, stdev=5531.66 00:21:12.661 clat (msec): min=62, max=274, avg=153.55, stdev=16.17 00:21:12.661 lat (msec): min=62, max=274, avg=155.96, stdev=16.41 00:21:12.661 clat percentiles (msec): 00:21:12.661 | 1.00th=[ 92], 5.00th=[ 136], 10.00th=[ 142], 20.00th=[ 146], 00:21:12.661 | 30.00th=[ 148], 40.00th=[ 150], 50.00th=[ 153], 60.00th=[ 155], 00:21:12.661 | 70.00th=[ 159], 80.00th=[ 161], 90.00th=[ 169], 95.00th=[ 176], 00:21:12.661 | 99.00th=[ 201], 99.50th=[ 234], 99.90th=[ 268], 99.95th=[ 268], 00:21:12.661 | 99.99th=[ 275] 00:21:12.661 bw ( KiB/s): min=99840, max=110080, per=6.73%, avg=104544.35, stdev=3056.44, samples=20 00:21:12.661 iops : min= 390, max= 430, avg=408.35, stdev=11.94, samples=20 00:21:12.661 lat (msec) : 100=1.47%, 250=98.29%, 500=0.24% 00:21:12.661 cpu : usr=0.33%, sys=1.84%, ctx=961, majf=0, minf=4097 00:21:12.661 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:21:12.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:12.661 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:12.661 issued rwts: total=4148,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:12.661 latency : target=0, 
window=0, percentile=100.00%, depth=64 00:21:12.661 job3: (groupid=0, jobs=1): err= 0: pid=78159: Thu Jul 25 09:03:17 2024 00:21:12.661 read: IOPS=404, BW=101MiB/s (106MB/s)(1024MiB/10124msec) 00:21:12.661 slat (usec): min=18, max=92530, avg=2437.42, stdev=5795.77 00:21:12.661 clat (msec): min=111, max=286, avg=155.51, stdev=15.13 00:21:12.661 lat (msec): min=111, max=290, avg=157.95, stdev=15.27 00:21:12.661 clat percentiles (msec): 00:21:12.661 | 1.00th=[ 125], 5.00th=[ 138], 10.00th=[ 142], 20.00th=[ 146], 00:21:12.661 | 30.00th=[ 150], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 157], 00:21:12.661 | 70.00th=[ 159], 80.00th=[ 163], 90.00th=[ 169], 95.00th=[ 184], 00:21:12.661 | 99.00th=[ 205], 99.50th=[ 228], 99.90th=[ 275], 99.95th=[ 275], 00:21:12.661 | 99.99th=[ 288] 00:21:12.661 bw ( KiB/s): min=79872, max=108544, per=6.64%, avg=103224.05, stdev=6252.46, samples=20 00:21:12.661 iops : min= 312, max= 424, avg=403.20, stdev=24.42, samples=20 00:21:12.661 lat (msec) : 250=99.66%, 500=0.34% 00:21:12.661 cpu : usr=0.19%, sys=1.39%, ctx=1027, majf=0, minf=4097 00:21:12.661 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:21:12.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:12.661 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:12.661 issued rwts: total=4096,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:12.661 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:12.661 job4: (groupid=0, jobs=1): err= 0: pid=78160: Thu Jul 25 09:03:17 2024 00:21:12.661 read: IOPS=331, BW=83.0MiB/s (87.0MB/s)(843MiB/10155msec) 00:21:12.661 slat (usec): min=15, max=97915, avg=2933.49, stdev=7177.88 00:21:12.661 clat (msec): min=43, max=323, avg=189.64, stdev=25.03 00:21:12.661 lat (msec): min=44, max=323, avg=192.57, stdev=25.59 00:21:12.661 clat percentiles (msec): 00:21:12.661 | 1.00th=[ 127], 5.00th=[ 167], 10.00th=[ 171], 20.00th=[ 178], 00:21:12.661 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 190], 00:21:12.661 | 70.00th=[ 192], 80.00th=[ 201], 90.00th=[ 226], 95.00th=[ 234], 00:21:12.661 | 99.00th=[ 259], 99.50th=[ 284], 99.90th=[ 326], 99.95th=[ 326], 00:21:12.661 | 99.99th=[ 326] 00:21:12.661 bw ( KiB/s): min=68096, max=92160, per=5.45%, avg=84617.50, stdev=6817.07, samples=20 00:21:12.661 iops : min= 266, max= 360, avg=330.50, stdev=26.66, samples=20 00:21:12.661 lat (msec) : 50=0.59%, 250=98.07%, 500=1.34% 00:21:12.661 cpu : usr=0.19%, sys=1.35%, ctx=862, majf=0, minf=4097 00:21:12.661 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1% 00:21:12.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:12.661 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:12.661 issued rwts: total=3370,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:12.661 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:12.661 job5: (groupid=0, jobs=1): err= 0: pid=78161: Thu Jul 25 09:03:17 2024 00:21:12.661 read: IOPS=444, BW=111MiB/s (116MB/s)(1125MiB/10128msec) 00:21:12.661 slat (usec): min=17, max=115415, avg=2216.62, stdev=5580.09 00:21:12.661 clat (msec): min=11, max=274, avg=141.57, stdev=38.40 00:21:12.661 lat (msec): min=11, max=274, avg=143.79, stdev=39.03 00:21:12.661 clat percentiles (msec): 00:21:12.661 | 1.00th=[ 37], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 142], 00:21:12.661 | 30.00th=[ 148], 40.00th=[ 150], 50.00th=[ 153], 60.00th=[ 155], 00:21:12.661 | 70.00th=[ 157], 80.00th=[ 161], 90.00th=[ 167], 95.00th=[ 174], 
00:21:12.661 | 99.00th=[ 188], 99.50th=[ 218], 99.90th=[ 271], 99.95th=[ 271], 00:21:12.661 | 99.99th=[ 275] 00:21:12.661 bw ( KiB/s): min=96256, max=276521, per=7.31%, avg=113620.00, stdev=38511.76, samples=20 00:21:12.661 iops : min= 376, max= 1080, avg=443.80, stdev=150.41, samples=20 00:21:12.661 lat (msec) : 20=0.29%, 50=10.98%, 100=0.60%, 250=87.91%, 500=0.22% 00:21:12.661 cpu : usr=0.15%, sys=1.84%, ctx=1092, majf=0, minf=4097 00:21:12.661 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:21:12.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:12.661 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:12.661 issued rwts: total=4501,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:12.661 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:12.661 job6: (groupid=0, jobs=1): err= 0: pid=78162: Thu Jul 25 09:03:17 2024 00:21:12.661 read: IOPS=985, BW=246MiB/s (258MB/s)(2467MiB/10016msec) 00:21:12.661 slat (usec): min=16, max=19971, avg=1008.87, stdev=2287.75 00:21:12.661 clat (usec): min=15080, max=95857, avg=63875.53, stdev=17230.29 00:21:12.661 lat (usec): min=18240, max=95966, avg=64884.39, stdev=17478.84 00:21:12.661 clat percentiles (usec): 00:21:12.661 | 1.00th=[35390], 5.00th=[38011], 10.00th=[39060], 20.00th=[40633], 00:21:12.661 | 30.00th=[43779], 40.00th=[69731], 50.00th=[72877], 60.00th=[74974], 00:21:12.661 | 70.00th=[76022], 80.00th=[78119], 90.00th=[80217], 95.00th=[82314], 00:21:12.661 | 99.00th=[86508], 99.50th=[88605], 99.90th=[91751], 99.95th=[91751], 00:21:12.661 | 99.99th=[95945] 00:21:12.661 bw ( KiB/s): min=203776, max=403456, per=16.16%, avg=251074.00, stdev=73132.47, samples=20 00:21:12.661 iops : min= 796, max= 1576, avg=980.60, stdev=285.77, samples=20 00:21:12.661 lat (msec) : 20=0.04%, 50=33.27%, 100=66.69% 00:21:12.661 cpu : usr=0.40%, sys=3.55%, ctx=2102, majf=0, minf=4097 00:21:12.661 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:21:12.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:12.661 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:12.661 issued rwts: total=9869,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:12.661 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:12.661 job7: (groupid=0, jobs=1): err= 0: pid=78163: Thu Jul 25 09:03:17 2024 00:21:12.661 read: IOPS=325, BW=81.3MiB/s (85.2MB/s)(825MiB/10153msec) 00:21:12.661 slat (usec): min=19, max=136872, avg=3053.88, stdev=7788.55 00:21:12.661 clat (msec): min=54, max=334, avg=193.54, stdev=30.25 00:21:12.661 lat (msec): min=62, max=354, avg=196.59, stdev=30.73 00:21:12.661 clat percentiles (msec): 00:21:12.661 | 1.00th=[ 85], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 178], 00:21:12.661 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 190], 00:21:12.661 | 70.00th=[ 194], 80.00th=[ 205], 90.00th=[ 239], 95.00th=[ 262], 00:21:12.661 | 99.00th=[ 279], 99.50th=[ 288], 99.90th=[ 305], 99.95th=[ 334], 00:21:12.661 | 99.99th=[ 334] 00:21:12.661 bw ( KiB/s): min=66048, max=93184, per=5.33%, avg=82833.05, stdev=8701.49, samples=20 00:21:12.661 iops : min= 258, max= 364, avg=323.55, stdev=33.99, samples=20 00:21:12.661 lat (msec) : 100=1.33%, 250=90.88%, 500=7.79% 00:21:12.661 cpu : usr=0.15%, sys=1.57%, ctx=809, majf=0, minf=4097 00:21:12.661 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:21:12.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:21:12.661 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:12.661 issued rwts: total=3300,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:12.661 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:12.661 job8: (groupid=0, jobs=1): err= 0: pid=78164: Thu Jul 25 09:03:17 2024 00:21:12.661 read: IOPS=324, BW=81.1MiB/s (85.0MB/s)(823MiB/10150msec) 00:21:12.661 slat (usec): min=16, max=165000, avg=3046.31, stdev=7787.53 00:21:12.661 clat (msec): min=104, max=343, avg=193.98, stdev=24.54 00:21:12.661 lat (msec): min=136, max=414, avg=197.03, stdev=25.23 00:21:12.661 clat percentiles (msec): 00:21:12.662 | 1.00th=[ 163], 5.00th=[ 171], 10.00th=[ 176], 20.00th=[ 178], 00:21:12.662 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 190], 00:21:12.662 | 70.00th=[ 194], 80.00th=[ 205], 90.00th=[ 226], 95.00th=[ 247], 00:21:12.662 | 99.00th=[ 284], 99.50th=[ 288], 99.90th=[ 326], 99.95th=[ 334], 00:21:12.662 | 99.99th=[ 342] 00:21:12.662 bw ( KiB/s): min=64000, max=92160, per=5.32%, avg=82660.35, stdev=9213.42, samples=20 00:21:12.662 iops : min= 250, max= 360, avg=322.85, stdev=36.04, samples=20 00:21:12.662 lat (msec) : 250=95.26%, 500=4.74% 00:21:12.662 cpu : usr=0.16%, sys=1.10%, ctx=856, majf=0, minf=4097 00:21:12.662 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:21:12.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:12.662 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:12.662 issued rwts: total=3293,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:12.662 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:12.662 job9: (groupid=0, jobs=1): err= 0: pid=78165: Thu Jul 25 09:03:17 2024 00:21:12.662 read: IOPS=1442, BW=361MiB/s (378MB/s)(3610MiB/10010msec) 00:21:12.662 slat (usec): min=16, max=152558, avg=680.12, stdev=2228.13 00:21:12.662 clat (usec): min=899, max=370328, avg=43626.71, stdev=18504.83 00:21:12.662 lat (usec): min=967, max=370479, avg=44306.83, stdev=18751.90 00:21:12.662 clat percentiles (msec): 00:21:12.662 | 1.00th=[ 21], 5.00th=[ 39], 10.00th=[ 40], 20.00th=[ 41], 00:21:12.662 | 30.00th=[ 41], 40.00th=[ 42], 50.00th=[ 42], 60.00th=[ 43], 00:21:12.662 | 70.00th=[ 44], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 49], 00:21:12.662 | 99.00th=[ 61], 99.50th=[ 226], 99.90th=[ 275], 99.95th=[ 275], 00:21:12.662 | 99.99th=[ 284] 00:21:12.662 bw ( KiB/s): min=117760, max=394752, per=23.69%, avg=368115.10, stdev=60198.68, samples=20 00:21:12.662 iops : min= 460, max= 1542, avg=1437.85, stdev=235.13, samples=20 00:21:12.662 lat (usec) : 1000=0.01% 00:21:12.662 lat (msec) : 2=0.05%, 4=0.03%, 10=0.14%, 20=0.74%, 50=95.08% 00:21:12.662 lat (msec) : 100=3.08%, 250=0.64%, 500=0.24% 00:21:12.662 cpu : usr=0.81%, sys=4.17%, ctx=3023, majf=0, minf=4097 00:21:12.662 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:21:12.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:12.662 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:12.662 issued rwts: total=14441,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:12.662 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:12.662 job10: (groupid=0, jobs=1): err= 0: pid=78166: Thu Jul 25 09:03:17 2024 00:21:12.662 read: IOPS=322, BW=80.7MiB/s (84.6MB/s)(819MiB/10152msec) 00:21:12.662 slat (usec): min=17, max=163579, avg=3062.69, stdev=7698.88 00:21:12.662 clat (msec): min=61, max=335, avg=194.95, stdev=24.89 00:21:12.662 
lat (msec): min=62, max=369, avg=198.01, stdev=25.38 00:21:12.662 clat percentiles (msec): 00:21:12.662 | 1.00th=[ 167], 5.00th=[ 171], 10.00th=[ 176], 20.00th=[ 180], 00:21:12.662 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 192], 00:21:12.662 | 70.00th=[ 197], 80.00th=[ 209], 90.00th=[ 230], 95.00th=[ 247], 00:21:12.662 | 99.00th=[ 284], 99.50th=[ 288], 99.90th=[ 317], 99.95th=[ 338], 00:21:12.662 | 99.99th=[ 338] 00:21:12.662 bw ( KiB/s): min=43607, max=91648, per=5.29%, avg=82215.70, stdev=11095.82, samples=20 00:21:12.662 iops : min= 170, max= 358, avg=321.10, stdev=43.42, samples=20 00:21:12.662 lat (msec) : 100=0.24%, 250=95.33%, 500=4.43% 00:21:12.662 cpu : usr=0.19%, sys=1.35%, ctx=812, majf=0, minf=4097 00:21:12.662 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:21:12.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:12.662 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:12.662 issued rwts: total=3276,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:12.662 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:12.662 00:21:12.662 Run status group 0 (all jobs): 00:21:12.662 READ: bw=1517MiB/s (1591MB/s), 80.7MiB/s-361MiB/s (84.6MB/s-378MB/s), io=15.0GiB (16.2GB), run=10010-10155msec 00:21:12.662 00:21:12.662 Disk stats (read/write): 00:21:12.662 nvme0n1: ios=8203/0, merge=0/0, ticks=1225204/0, in_queue=1225204, util=97.83% 00:21:12.662 nvme10n1: ios=14286/0, merge=0/0, ticks=1241931/0, in_queue=1241931, util=98.00% 00:21:12.662 nvme1n1: ios=8184/0, merge=0/0, ticks=1226326/0, in_queue=1226326, util=98.16% 00:21:12.662 nvme2n1: ios=8086/0, merge=0/0, ticks=1226802/0, in_queue=1226802, util=98.30% 00:21:12.662 nvme3n1: ios=6613/0, merge=0/0, ticks=1226301/0, in_queue=1226301, util=98.34% 00:21:12.662 nvme4n1: ios=8888/0, merge=0/0, ticks=1229884/0, in_queue=1229884, util=98.55% 00:21:12.662 nvme5n1: ios=19645/0, merge=0/0, ticks=1241414/0, in_queue=1241414, util=98.58% 00:21:12.662 nvme6n1: ios=6478/0, merge=0/0, ticks=1224421/0, in_queue=1224421, util=98.69% 00:21:12.662 nvme7n1: ios=6461/0, merge=0/0, ticks=1224175/0, in_queue=1224175, util=98.99% 00:21:12.662 nvme8n1: ios=28851/0, merge=0/0, ticks=1245326/0, in_queue=1245326, util=99.04% 00:21:12.662 nvme9n1: ios=6431/0, merge=0/0, ticks=1223161/0, in_queue=1223161, util=99.12% 00:21:12.662 09:03:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:21:12.662 [global] 00:21:12.662 thread=1 00:21:12.662 invalidate=1 00:21:12.662 rw=randwrite 00:21:12.662 time_based=1 00:21:12.662 runtime=10 00:21:12.662 ioengine=libaio 00:21:12.662 direct=1 00:21:12.662 bs=262144 00:21:12.662 iodepth=64 00:21:12.662 norandommap=1 00:21:12.662 numjobs=1 00:21:12.662 00:21:12.662 [job0] 00:21:12.662 filename=/dev/nvme0n1 00:21:12.662 [job1] 00:21:12.662 filename=/dev/nvme10n1 00:21:12.662 [job2] 00:21:12.662 filename=/dev/nvme1n1 00:21:12.662 [job3] 00:21:12.662 filename=/dev/nvme2n1 00:21:12.662 [job4] 00:21:12.662 filename=/dev/nvme3n1 00:21:12.662 [job5] 00:21:12.662 filename=/dev/nvme4n1 00:21:12.662 [job6] 00:21:12.662 filename=/dev/nvme5n1 00:21:12.662 [job7] 00:21:12.662 filename=/dev/nvme6n1 00:21:12.662 [job8] 00:21:12.662 filename=/dev/nvme7n1 00:21:12.662 [job9] 00:21:12.662 filename=/dev/nvme8n1 00:21:12.662 [job10] 00:21:12.662 filename=/dev/nvme9n1 00:21:12.662 Could not set queue depth (nvme0n1) 
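After the read pass, the same fio-wrapper is invoked with -t randwrite -r 10, and the job file it prints above pins every parameter: 256 KiB blocks (-i 262144), queue depth 64 (-d 64), libaio with direct I/O, 10-second time-based runs, and one job per connected namespace. A rough stand-alone equivalent, assuming fio is installed and the eleven /dev/nvme*n1 devices from the connect loop are still present (the wrapper's own path and any extra bookkeeping it does are not reproduced here):

    # Regenerate the job file the wrapper printed above and run it directly.
    job=/tmp/multiconnection-randwrite.fio
    {
        printf '[global]\nthread=1\ninvalidate=1\nrw=randwrite\ntime_based=1\n'
        printf 'runtime=10\nioengine=libaio\ndirect=1\nbs=262144\niodepth=64\n'
        printf 'norandommap=1\nnumjobs=1\n'
        n=0
        # Device order matches the job list in the trace: nvme0n1, nvme10n1, nvme1n1..nvme9n1.
        for dev in /dev/nvme{0,10,1,2,3,4,5,6,7,8,9}n1; do
            printf '[job%d]\nfilename=%s\n' "$n" "$dev"
            n=$((n + 1))
        done
    } > "$job"
    fio "$job"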
00:21:12.662 Could not set queue depth (nvme10n1) 00:21:12.662 Could not set queue depth (nvme1n1) 00:21:12.662 Could not set queue depth (nvme2n1) 00:21:12.662 Could not set queue depth (nvme3n1) 00:21:12.662 Could not set queue depth (nvme4n1) 00:21:12.662 Could not set queue depth (nvme5n1) 00:21:12.662 Could not set queue depth (nvme6n1) 00:21:12.662 Could not set queue depth (nvme7n1) 00:21:12.662 Could not set queue depth (nvme8n1) 00:21:12.662 Could not set queue depth (nvme9n1) 00:21:12.662 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:12.662 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:12.662 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:12.662 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:12.662 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:12.662 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:12.662 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:12.662 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:12.662 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:12.662 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:12.662 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:12.662 fio-3.35 00:21:12.662 Starting 11 threads 00:21:22.635 00:21:22.635 job0: (groupid=0, jobs=1): err= 0: pid=78369: Thu Jul 25 09:03:28 2024 00:21:22.635 write: IOPS=264, BW=66.1MiB/s (69.3MB/s)(674MiB/10208msec); 0 zone resets 00:21:22.635 slat (usec): min=24, max=83117, avg=3703.82, stdev=6702.67 00:21:22.635 clat (msec): min=30, max=436, avg=238.38, stdev=39.03 00:21:22.635 lat (msec): min=30, max=436, avg=242.09, stdev=39.07 00:21:22.635 clat percentiles (msec): 00:21:22.635 | 1.00th=[ 73], 5.00th=[ 215], 10.00th=[ 218], 20.00th=[ 220], 00:21:22.635 | 30.00th=[ 230], 40.00th=[ 230], 50.00th=[ 232], 60.00th=[ 232], 00:21:22.635 | 70.00th=[ 234], 80.00th=[ 247], 90.00th=[ 296], 95.00th=[ 309], 00:21:22.635 | 99.00th=[ 334], 99.50th=[ 393], 99.90th=[ 422], 99.95th=[ 439], 00:21:22.635 | 99.99th=[ 439] 00:21:22.635 bw ( KiB/s): min=51200, max=71680, per=5.66%, avg=67423.20, stdev=6411.69, samples=20 00:21:22.635 iops : min= 200, max= 280, avg=263.35, stdev=25.03, samples=20 00:21:22.635 lat (msec) : 50=0.59%, 100=0.89%, 250=79.38%, 500=19.13% 00:21:22.635 cpu : usr=0.65%, sys=0.73%, ctx=3357, majf=0, minf=1 00:21:22.635 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:21:22.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:22.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:22.635 issued rwts: total=0,2697,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:22.635 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:22.635 job1: (groupid=0, jobs=1): err= 0: pid=78370: Thu Jul 25 09:03:28 2024 00:21:22.635 write: IOPS=650, 
BW=163MiB/s (170MB/s)(1641MiB/10098msec); 0 zone resets 00:21:22.635 slat (usec): min=18, max=10325, avg=1519.06, stdev=2614.85 00:21:22.635 clat (msec): min=13, max=202, avg=96.91, stdev=18.15 00:21:22.635 lat (msec): min=13, max=202, avg=98.42, stdev=18.26 00:21:22.635 clat percentiles (msec): 00:21:22.635 | 1.00th=[ 67], 5.00th=[ 69], 10.00th=[ 71], 20.00th=[ 73], 00:21:22.635 | 30.00th=[ 101], 40.00th=[ 103], 50.00th=[ 107], 60.00th=[ 108], 00:21:22.635 | 70.00th=[ 109], 80.00th=[ 109], 90.00th=[ 111], 95.00th=[ 112], 00:21:22.635 | 99.00th=[ 120], 99.50th=[ 148], 99.90th=[ 188], 99.95th=[ 197], 00:21:22.635 | 99.99th=[ 203] 00:21:22.635 bw ( KiB/s): min=148480, max=226816, per=13.98%, avg=166448.00, stdev=30100.61, samples=20 00:21:22.635 iops : min= 580, max= 886, avg=650.15, stdev=117.50, samples=20 00:21:22.635 lat (msec) : 20=0.18%, 50=0.44%, 100=28.99%, 250=70.38% 00:21:22.635 cpu : usr=1.17%, sys=1.82%, ctx=8338, majf=0, minf=1 00:21:22.635 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:21:22.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:22.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:22.635 issued rwts: total=0,6564,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:22.635 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:22.635 job2: (groupid=0, jobs=1): err= 0: pid=78382: Thu Jul 25 09:03:28 2024 00:21:22.635 write: IOPS=542, BW=136MiB/s (142MB/s)(1369MiB/10099msec); 0 zone resets 00:21:22.635 slat (usec): min=18, max=58267, avg=1787.01, stdev=3292.59 00:21:22.635 clat (msec): min=10, max=248, avg=116.19, stdev=29.63 00:21:22.635 lat (msec): min=12, max=248, avg=117.98, stdev=29.94 00:21:22.635 clat percentiles (msec): 00:21:22.635 | 1.00th=[ 42], 5.00th=[ 102], 10.00th=[ 102], 20.00th=[ 104], 00:21:22.635 | 30.00th=[ 108], 40.00th=[ 108], 50.00th=[ 108], 60.00th=[ 109], 00:21:22.635 | 70.00th=[ 110], 80.00th=[ 111], 90.00th=[ 163], 95.00th=[ 194], 00:21:22.635 | 99.00th=[ 218], 99.50th=[ 220], 99.90th=[ 228], 99.95th=[ 230], 00:21:22.635 | 99.99th=[ 249] 00:21:22.635 bw ( KiB/s): min=74388, max=154624, per=11.64%, avg=138580.20, stdev=26424.25, samples=20 00:21:22.635 iops : min= 290, max= 604, avg=541.30, stdev=103.29, samples=20 00:21:22.635 lat (msec) : 20=0.24%, 50=1.06%, 100=1.99%, 250=96.71% 00:21:22.635 cpu : usr=1.12%, sys=1.46%, ctx=8052, majf=0, minf=1 00:21:22.635 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:22.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:22.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:22.635 issued rwts: total=0,5476,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:22.635 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:22.635 job3: (groupid=0, jobs=1): err= 0: pid=78383: Thu Jul 25 09:03:28 2024 00:21:22.635 write: IOPS=897, BW=224MiB/s (235MB/s)(2256MiB/10060msec); 0 zone resets 00:21:22.635 slat (usec): min=18, max=47856, avg=1103.29, stdev=1967.43 00:21:22.635 clat (msec): min=52, max=147, avg=70.21, stdev=16.45 00:21:22.635 lat (msec): min=52, max=147, avg=71.31, stdev=16.61 00:21:22.635 clat percentiles (msec): 00:21:22.635 | 1.00th=[ 61], 5.00th=[ 61], 10.00th=[ 61], 20.00th=[ 62], 00:21:22.635 | 30.00th=[ 64], 40.00th=[ 65], 50.00th=[ 65], 60.00th=[ 65], 00:21:22.635 | 70.00th=[ 66], 80.00th=[ 66], 90.00th=[ 109], 95.00th=[ 113], 00:21:22.635 | 99.00th=[ 116], 99.50th=[ 116], 99.90th=[ 132], 99.95th=[ 140], 
00:21:22.635 | 99.99th=[ 148] 00:21:22.635 bw ( KiB/s): min=133120, max=257536, per=19.27%, avg=229401.70, stdev=46003.79, samples=20 00:21:22.636 iops : min= 520, max= 1006, avg=896.10, stdev=179.70, samples=20 00:21:22.636 lat (msec) : 100=86.81%, 250=13.19% 00:21:22.636 cpu : usr=1.53%, sys=2.38%, ctx=10863, majf=0, minf=1 00:21:22.636 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:21:22.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:22.636 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:22.636 issued rwts: total=0,9025,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:22.636 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:22.636 job4: (groupid=0, jobs=1): err= 0: pid=78384: Thu Jul 25 09:03:28 2024 00:21:22.636 write: IOPS=263, BW=65.8MiB/s (69.0MB/s)(671MiB/10198msec); 0 zone resets 00:21:22.636 slat (usec): min=24, max=147972, avg=3722.14, stdev=6977.53 00:21:22.636 clat (msec): min=155, max=427, avg=239.38, stdev=26.78 00:21:22.636 lat (msec): min=155, max=427, avg=243.10, stdev=26.30 00:21:22.636 clat percentiles (msec): 00:21:22.636 | 1.00th=[ 211], 5.00th=[ 215], 10.00th=[ 218], 20.00th=[ 224], 00:21:22.636 | 30.00th=[ 230], 40.00th=[ 230], 50.00th=[ 232], 60.00th=[ 232], 00:21:22.636 | 70.00th=[ 234], 80.00th=[ 253], 90.00th=[ 279], 95.00th=[ 288], 00:21:22.636 | 99.00th=[ 355], 99.50th=[ 384], 99.90th=[ 414], 99.95th=[ 426], 00:21:22.636 | 99.99th=[ 426] 00:21:22.636 bw ( KiB/s): min=47616, max=72192, per=5.63%, avg=67072.00, stdev=6694.24, samples=20 00:21:22.636 iops : min= 186, max= 282, avg=262.00, stdev=26.15, samples=20 00:21:22.636 lat (msec) : 250=78.61%, 500=21.39% 00:21:22.636 cpu : usr=0.85%, sys=0.71%, ctx=2860, majf=0, minf=1 00:21:22.636 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:21:22.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:22.636 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:22.636 issued rwts: total=0,2683,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:22.636 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:22.636 job5: (groupid=0, jobs=1): err= 0: pid=78385: Thu Jul 25 09:03:28 2024 00:21:22.636 write: IOPS=408, BW=102MiB/s (107MB/s)(1034MiB/10124msec); 0 zone resets 00:21:22.636 slat (usec): min=17, max=53226, avg=2414.20, stdev=4309.82 00:21:22.636 clat (msec): min=45, max=269, avg=154.22, stdev=22.01 00:21:22.636 lat (msec): min=45, max=269, avg=156.63, stdev=21.91 00:21:22.636 clat percentiles (msec): 00:21:22.636 | 1.00th=[ 138], 5.00th=[ 140], 10.00th=[ 140], 20.00th=[ 142], 00:21:22.636 | 30.00th=[ 148], 40.00th=[ 148], 50.00th=[ 148], 60.00th=[ 150], 00:21:22.636 | 70.00th=[ 150], 80.00th=[ 155], 90.00th=[ 174], 95.00th=[ 211], 00:21:22.636 | 99.00th=[ 251], 99.50th=[ 255], 99.90th=[ 262], 99.95th=[ 262], 00:21:22.636 | 99.99th=[ 271] 00:21:22.636 bw ( KiB/s): min=68608, max=111104, per=8.76%, avg=104243.20, stdev=12348.93, samples=20 00:21:22.636 iops : min= 268, max= 434, avg=407.20, stdev=48.24, samples=20 00:21:22.636 lat (msec) : 50=0.05%, 100=0.39%, 250=98.45%, 500=1.11% 00:21:22.636 cpu : usr=0.76%, sys=1.17%, ctx=5281, majf=0, minf=1 00:21:22.636 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:21:22.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:22.636 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:22.636 issued rwts: 
total=0,4135,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:22.636 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:22.636 job6: (groupid=0, jobs=1): err= 0: pid=78386: Thu Jul 25 09:03:28 2024 00:21:22.636 write: IOPS=270, BW=67.6MiB/s (70.8MB/s)(690MiB/10216msec); 0 zone resets 00:21:22.636 slat (usec): min=16, max=39677, avg=3547.31, stdev=6408.26 00:21:22.636 clat (msec): min=31, max=435, avg=233.15, stdev=37.25 00:21:22.636 lat (msec): min=31, max=435, avg=236.69, stdev=37.36 00:21:22.636 clat percentiles (msec): 00:21:22.636 | 1.00th=[ 97], 5.00th=[ 178], 10.00th=[ 209], 20.00th=[ 218], 00:21:22.636 | 30.00th=[ 230], 40.00th=[ 230], 50.00th=[ 232], 60.00th=[ 232], 00:21:22.636 | 70.00th=[ 243], 80.00th=[ 247], 90.00th=[ 279], 95.00th=[ 292], 00:21:22.636 | 99.00th=[ 334], 99.50th=[ 393], 99.90th=[ 422], 99.95th=[ 435], 00:21:22.636 | 99.99th=[ 435] 00:21:22.636 bw ( KiB/s): min=57344, max=83968, per=5.80%, avg=69043.20, stdev=6350.01, samples=20 00:21:22.636 iops : min= 224, max= 328, avg=269.70, stdev=24.80, samples=20 00:21:22.636 lat (msec) : 50=0.29%, 100=0.83%, 250=83.70%, 500=15.18% 00:21:22.636 cpu : usr=0.53%, sys=0.85%, ctx=3328, majf=0, minf=1 00:21:22.636 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:21:22.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:22.636 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:22.636 issued rwts: total=0,2761,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:22.636 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:22.636 job7: (groupid=0, jobs=1): err= 0: pid=78387: Thu Jul 25 09:03:28 2024 00:21:22.636 write: IOPS=410, BW=103MiB/s (108MB/s)(1039MiB/10131msec); 0 zone resets 00:21:22.636 slat (usec): min=18, max=67197, avg=2400.35, stdev=4272.64 00:21:22.636 clat (msec): min=26, max=280, avg=153.56, stdev=21.14 00:21:22.636 lat (msec): min=26, max=280, avg=155.96, stdev=21.02 00:21:22.636 clat percentiles (msec): 00:21:22.636 | 1.00th=[ 136], 5.00th=[ 140], 10.00th=[ 140], 20.00th=[ 144], 00:21:22.636 | 30.00th=[ 148], 40.00th=[ 148], 50.00th=[ 148], 60.00th=[ 150], 00:21:22.636 | 70.00th=[ 150], 80.00th=[ 157], 90.00th=[ 176], 95.00th=[ 201], 00:21:22.636 | 99.00th=[ 239], 99.50th=[ 243], 99.90th=[ 271], 99.95th=[ 271], 00:21:22.636 | 99.99th=[ 279] 00:21:22.636 bw ( KiB/s): min=75776, max=112415, per=8.80%, avg=104769.55, stdev=10529.13, samples=20 00:21:22.636 iops : min= 296, max= 439, avg=409.25, stdev=41.12, samples=20 00:21:22.636 lat (msec) : 50=0.29%, 100=0.38%, 250=98.99%, 500=0.34% 00:21:22.636 cpu : usr=0.86%, sys=1.28%, ctx=3675, majf=0, minf=1 00:21:22.636 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:21:22.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:22.636 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:22.636 issued rwts: total=0,4156,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:22.636 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:22.636 job8: (groupid=0, jobs=1): err= 0: pid=78388: Thu Jul 25 09:03:28 2024 00:21:22.636 write: IOPS=263, BW=65.8MiB/s (69.0MB/s)(672MiB/10202msec); 0 zone resets 00:21:22.636 slat (usec): min=22, max=56273, avg=3716.20, stdev=6627.05 00:21:22.636 clat (msec): min=43, max=432, avg=239.15, stdev=37.88 00:21:22.636 lat (msec): min=43, max=432, avg=242.86, stdev=37.90 00:21:22.636 clat percentiles (msec): 00:21:22.636 | 1.00th=[ 93], 5.00th=[ 215], 10.00th=[ 218], 20.00th=[ 220], 
00:21:22.636 | 30.00th=[ 230], 40.00th=[ 230], 50.00th=[ 232], 60.00th=[ 232], 00:21:22.636 | 70.00th=[ 234], 80.00th=[ 247], 90.00th=[ 300], 95.00th=[ 321], 00:21:22.636 | 99.00th=[ 330], 99.50th=[ 388], 99.90th=[ 418], 99.95th=[ 435], 00:21:22.636 | 99.99th=[ 435] 00:21:22.636 bw ( KiB/s): min=51200, max=71680, per=5.64%, avg=67174.40, stdev=6775.36, samples=20 00:21:22.636 iops : min= 200, max= 280, avg=262.40, stdev=26.47, samples=20 00:21:22.636 lat (msec) : 50=0.11%, 100=0.89%, 250=80.91%, 500=18.09% 00:21:22.636 cpu : usr=0.84%, sys=0.64%, ctx=3003, majf=0, minf=1 00:21:22.636 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:21:22.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:22.636 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:22.636 issued rwts: total=0,2687,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:22.636 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:22.636 job9: (groupid=0, jobs=1): err= 0: pid=78389: Thu Jul 25 09:03:28 2024 00:21:22.636 write: IOPS=458, BW=115MiB/s (120MB/s)(1161MiB/10131msec); 0 zone resets 00:21:22.636 slat (usec): min=17, max=18578, avg=2126.65, stdev=3711.40 00:21:22.636 clat (msec): min=15, max=278, avg=137.44, stdev=21.97 00:21:22.636 lat (msec): min=15, max=278, avg=139.57, stdev=22.00 00:21:22.636 clat percentiles (msec): 00:21:22.636 | 1.00th=[ 65], 5.00th=[ 107], 10.00th=[ 111], 20.00th=[ 114], 00:21:22.636 | 30.00th=[ 140], 40.00th=[ 140], 50.00th=[ 148], 60.00th=[ 148], 00:21:22.636 | 70.00th=[ 148], 80.00th=[ 150], 90.00th=[ 150], 95.00th=[ 153], 00:21:22.636 | 99.00th=[ 190], 99.50th=[ 224], 99.90th=[ 271], 99.95th=[ 271], 00:21:22.636 | 99.99th=[ 279] 00:21:22.636 bw ( KiB/s): min=107008, max=145920, per=9.85%, avg=117273.60, stdev=14127.05, samples=20 00:21:22.636 iops : min= 418, max= 570, avg=458.10, stdev=55.18, samples=20 00:21:22.636 lat (msec) : 20=0.09%, 50=0.60%, 100=1.10%, 250=97.91%, 500=0.30% 00:21:22.636 cpu : usr=0.97%, sys=1.18%, ctx=5832, majf=0, minf=1 00:21:22.636 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:21:22.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:22.636 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:22.636 issued rwts: total=0,4644,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:22.636 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:22.636 job10: (groupid=0, jobs=1): err= 0: pid=78390: Thu Jul 25 09:03:28 2024 00:21:22.636 write: IOPS=262, BW=65.6MiB/s (68.8MB/s)(669MiB/10201msec); 0 zone resets 00:21:22.636 slat (usec): min=17, max=103896, avg=3733.04, stdev=6781.90 00:21:22.636 clat (msec): min=106, max=426, avg=240.03, stdev=30.47 00:21:22.636 lat (msec): min=106, max=426, avg=243.77, stdev=30.24 00:21:22.636 clat percentiles (msec): 00:21:22.636 | 1.00th=[ 180], 5.00th=[ 215], 10.00th=[ 218], 20.00th=[ 222], 00:21:22.636 | 30.00th=[ 230], 40.00th=[ 230], 50.00th=[ 232], 60.00th=[ 232], 00:21:22.636 | 70.00th=[ 234], 80.00th=[ 251], 90.00th=[ 288], 95.00th=[ 309], 00:21:22.637 | 99.00th=[ 326], 99.50th=[ 384], 99.90th=[ 414], 99.95th=[ 426], 00:21:22.637 | 99.99th=[ 426] 00:21:22.637 bw ( KiB/s): min=53248, max=71680, per=5.62%, avg=66918.40, stdev=6418.78, samples=20 00:21:22.637 iops : min= 208, max= 280, avg=261.40, stdev=25.07, samples=20 00:21:22.637 lat (msec) : 250=80.20%, 500=19.80% 00:21:22.637 cpu : usr=0.63%, sys=0.69%, ctx=4388, majf=0, minf=1 00:21:22.637 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:21:22.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:22.637 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:22.637 issued rwts: total=0,2677,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:22.637 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:22.637 00:21:22.637 Run status group 0 (all jobs): 00:21:22.637 WRITE: bw=1163MiB/s (1219MB/s), 65.6MiB/s-224MiB/s (68.8MB/s-235MB/s), io=11.6GiB (12.5GB), run=10060-10216msec 00:21:22.637 00:21:22.637 Disk stats (read/write): 00:21:22.637 nvme0n1: ios=50/5263, merge=0/0, ticks=57/1207426, in_queue=1207483, util=97.87% 00:21:22.637 nvme10n1: ios=49/12993, merge=0/0, ticks=99/1215222, in_queue=1215321, util=98.17% 00:21:22.637 nvme1n1: ios=46/10816, merge=0/0, ticks=74/1216071, in_queue=1216145, util=98.24% 00:21:22.637 nvme2n1: ios=32/17859, merge=0/0, ticks=55/1214090, in_queue=1214145, util=98.06% 00:21:22.637 nvme3n1: ios=27/5227, merge=0/0, ticks=55/1206329, in_queue=1206384, util=98.00% 00:21:22.637 nvme4n1: ios=0/8113, merge=0/0, ticks=0/1209443, in_queue=1209443, util=98.00% 00:21:22.637 nvme5n1: ios=0/5390, merge=0/0, ticks=0/1209364, in_queue=1209364, util=98.45% 00:21:22.637 nvme6n1: ios=0/8174, merge=0/0, ticks=0/1212163, in_queue=1212163, util=98.39% 00:21:22.637 nvme7n1: ios=0/5240, merge=0/0, ticks=0/1206818, in_queue=1206818, util=98.62% 00:21:22.637 nvme8n1: ios=0/9146, merge=0/0, ticks=0/1211968, in_queue=1211968, util=98.74% 00:21:22.637 nvme9n1: ios=0/5213, merge=0/0, ticks=0/1206080, in_queue=1206080, util=98.79% 00:21:22.637 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:21:22.637 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:21:22.637 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:22.637 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:22.637 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:22.637 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:21:22.637 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:21:22.637 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:22.637 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:21:22.637 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:22.637 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:21:22.637 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:21:22.637 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:22.637 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.637 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:22.637 09:03:28 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.637 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:22.637 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:21:22.637 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:21:22.637 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:21:22.637 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:21:22.637 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:22.637 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:21:22.637 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:22.637 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:21:22.637 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:21:22.637 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:22.637 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.637 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:22.637 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.637 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:22.637 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:21:22.637 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:21:22.637 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:21:22.637 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:21:22.637 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:22.637 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:21:22.637 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:22.637 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:21:22.637 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:21:22.637 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:22.637 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.637 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:22.637 09:03:29 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.637 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:22.637 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:21:22.637 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:21:22.637 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:21:22.637 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:21:22.637 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:22.637 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:21:22.637 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:22.637 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:21:22.637 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:21:22.637 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:21:22.637 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.637 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:22.637 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.637 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:22.637 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:21:22.637 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:21:22.637 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:21:22.637 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:21:22.637 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:22.637 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:21:22.637 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:22.637 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:21:22.637 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:21:22.637 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:21:22.637 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.637 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:22.637 09:03:29 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.637 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:22.637 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:21:22.637 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:21:22.638 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:22.638 09:03:29 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:21:22.638 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:21:22.638 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:22.638 09:03:29 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:21:22.638 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:22.638 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:21:22.897 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:21:22.897 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:21:22.897 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:21:22.897 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:22.897 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:21:22.897 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:22.897 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:21:22.897 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:21:22.897 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:21:22.897 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.897 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:22.897 
09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.897 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:21:22.898 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:21:22.898 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:21:22.898 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:22.898 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:21:22.898 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:22.898 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:21:22.898 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:22.898 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:22.898 rmmod nvme_tcp 00:21:22.898 rmmod nvme_fabrics 00:21:22.898 rmmod nvme_keyring 00:21:22.898 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:22.898 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:21:22.898 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:21:22.898 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 77701 ']' 00:21:22.898 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 77701 00:21:22.898 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 77701 ']' 00:21:22.898 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 77701 00:21:22.898 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname 00:21:22.898 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:22.898 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77701 00:21:22.898 killing process with pid 77701 00:21:22.898 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:22.898 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:22.898 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77701' 00:21:22.898 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 77701 00:21:22.898 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 77701 00:21:26.197 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:26.197 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:26.197 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:26.197 09:03:32 
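The teardown traced above walks all eleven subsystems in turn: disconnect the initiator-side controller, wait for its namespace (serial SPDK<i>) to drop out of lsblk, then delete the subsystem over RPC, before the final rmmod and killprocess cleanup. A minimal sketch of that loop, with the harness's rpc_cmd approximated by a direct scripts/rpc.py call and NVMF_SUBSYS assumed to be 11 as in this run:

NVMF_SUBSYS=11
for i in $(seq 1 "$NVMF_SUBSYS"); do
    # Drop the initiator-side controller for this subsystem.
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
    # waitforserial_disconnect: poll until the namespace with serial SPDK$i is gone.
    while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do
        sleep 1
    done
    # Remove the subsystem on the target side (rpc_cmd wraps this in the harness).
    scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
done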
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:26.197 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:26.197 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:26.197 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:26.197 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:26.197 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:26.197 00:21:26.197 real 0m52.118s 00:21:26.197 user 2m56.090s 00:21:26.197 sys 0m29.507s 00:21:26.197 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:26.197 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:26.197 ************************************ 00:21:26.197 END TEST nvmf_multiconnection 00:21:26.197 ************************************ 00:21:26.197 09:03:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:21:26.197 09:03:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:26.197 09:03:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:26.197 09:03:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:26.197 ************************************ 00:21:26.197 START TEST nvmf_initiator_timeout 00:21:26.197 ************************************ 00:21:26.197 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:21:26.197 * Looking for test storage... 
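run_test is the harness wrapper that prints the START TEST / END TEST banners and the real/user/sys timing block seen above before handing control to the next suite. A rough, assumed shape of that wrapper (the real helper lives in autotest_common.sh; this is not its verbatim code):

run_test() {
    local test_name=$1; shift
    echo "START TEST $test_name"
    # Run the test script and time it so the real/user/sys block can be reported.
    time "$@"
    echo "END TEST $test_name"
}
run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp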
00:21:26.197 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:26.197 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:26.197 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:21:26.197 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:26.197 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:26.197 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:26.197 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:26.197 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:26.197 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:26.197 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:26.197 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:26.197 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:26.197 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:26.197 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:21:26.197 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:21:26.197 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:26.197 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:26.197 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:26.197 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:26.197 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:26.197 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:26.197 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:26.197 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:26.197 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.197 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.197 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.197 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:21:26.197 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.197 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:21:26.197 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:26.197 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:26.197 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:26.198 09:03:33 
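Just above, before the PATH exports, nvmf/common.sh derives the initiator identity that the later nvme connect call reuses: nvme gen-hostnqn produces the host NQN, and the host ID is its trailing UUID. A small sketch, where the parameter expansion used to peel off the UUID is an assumption about the helper's implementation (the resulting values match the trace):

# Generate a host NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>.
NVME_HOSTNQN=$(nvme gen-hostnqn)
# Assumed extraction of the bare UUID for --hostid.
NVME_HOSTID=${NVME_HOSTNQN##*:}
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")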
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:26.198 09:03:33 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:26.198 Cannot find device "nvmf_tgt_br" 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # true 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:26.198 Cannot find device "nvmf_tgt_br2" 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # true 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:26.198 Cannot find device "nvmf_tgt_br" 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # true 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:26.198 Cannot find device "nvmf_tgt_br2" 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # true 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:26.198 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:26.198 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # 
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:26.198 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:26.456 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:26.456 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:26.456 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:26.456 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:26.456 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:26.456 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:26.456 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:26.456 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:26.456 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:26.456 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:26.456 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:26.456 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:26.456 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:26.456 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:26.456 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:26.456 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:26.456 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:26.456 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:26.456 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:26.456 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:21:26.456 00:21:26.456 --- 10.0.0.2 ping statistics --- 00:21:26.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:26.456 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:21:26.456 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:26.456 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:26.456 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:21:26.456 00:21:26.456 --- 10.0.0.3 ping statistics --- 00:21:26.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:26.456 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:21:26.456 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:26.456 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:26.456 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:21:26.456 00:21:26.456 --- 10.0.0.1 ping statistics --- 00:21:26.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:26.456 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:21:26.456 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:26.456 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@433 -- # return 0 00:21:26.456 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:26.456 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:26.456 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:26.456 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:26.456 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:26.456 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:26.456 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:26.456 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:21:26.456 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:26.456 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:26.456 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:26.456 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=78785 00:21:26.456 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:26.456 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 78785 00:21:26.456 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 78785 ']' 00:21:26.456 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:26.456 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:26.456 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:26.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
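The nvmf_veth_init steps traced above build a three-address veth topology: the initiator keeps nvmf_init_if (10.0.0.1), the target ends nvmf_tgt_if and nvmf_tgt_if2 (10.0.0.2 and 10.0.0.3) move into the nvmf_tgt_ns_spdk namespace, and the peer ends are bridged on nvmf_br, with an iptables rule admitting TCP port 4420 and ping checks in both directions. A condensed sketch of the same commands (ordering simplified relative to the trace):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# Bridge the host-side peers so 10.0.0.1 can reach the namespaced target addresses.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for peer in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$peer" up
    ip link set "$peer" master nvmf_br
done
# Admit NVMe/TCP traffic on port 4420 and forwarding across the bridge.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# Sanity pings in both directions, as in the trace.
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1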
00:21:26.456 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:26.456 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:26.714 [2024-07-25 09:03:33.574631] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:21:26.714 [2024-07-25 09:03:33.574792] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:26.714 [2024-07-25 09:03:33.802347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:26.972 [2024-07-25 09:03:34.068895] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:26.973 [2024-07-25 09:03:34.068970] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:26.973 [2024-07-25 09:03:34.068989] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:26.973 [2024-07-25 09:03:34.069005] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:26.973 [2024-07-25 09:03:34.069021] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:26.973 [2024-07-25 09:03:34.069216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:26.973 [2024-07-25 09:03:34.069521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:26.973 [2024-07-25 09:03:34.069972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:26.973 [2024-07-25 09:03:34.070222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:27.231 [2024-07-25 09:03:34.273376] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:27.489 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:27.489 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0 00:21:27.489 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:27.489 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:27.489 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:27.489 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:27.489 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:27.489 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:27.489 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.489 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:27.747 Malloc0 00:21:27.747 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.747 09:03:34 
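nvmfappstart above launches the target inside the namespace with -m 0xF (four reactors, matching the "Reactor started on core" notices), -i 0 for the shared-memory id and -e 0xFFFF for the tracepoint group mask, then waitforlisten blocks until the RPC socket answers, after which the 64 MiB Malloc0 bdev is created. A minimal sketch of that startup, with the socket poll approximated by spdk_get_version (the harness's waitforlisten is more involved):

# Start the target in the test namespace.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Poll the RPC socket until the app is up (approximation of waitforlisten).
until scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    sleep 0.5
done
# 64 MiB malloc bdev with 512-byte blocks, the base for the delay bdev created next.
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0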
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:21:27.747 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.747 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:27.747 Delay0 00:21:27.747 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.747 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:27.747 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.747 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:27.747 [2024-07-25 09:03:34.681350] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:27.747 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.747 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:21:27.747 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.748 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:27.748 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.748 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:27.748 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.748 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:27.748 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.748 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:27.748 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.748 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:27.748 [2024-07-25 09:03:34.713551] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:27.748 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.748 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid=a4705431-95c9-4bc1-9185-4a8233d2d7f5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:27.748 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:21:27.748 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 
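The subsystem configuration traced above stacks a delay bdev on Malloc0 and exports it over NVMe/TCP before the initiator connects through the veth bridge. The same sequence written as plain rpc.py calls (rpc_cmd in the log is the harness wrapper for these; NVME_HOSTNQN and NVME_HOSTID are the values generated earlier):

# Delay0 wraps Malloc0 with configurable read/write latencies (-r/-t/-w/-n, all 30 here
# and raised later in the test via bdev_delay_update_latency).
scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
# TCP transport with the harness's default options (-o, -u 8192) as seen in the trace.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Initiator side: connect to the listener and let the namespace appear as /dev/nvme0n1.
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420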
00:21:27.748 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:27.748 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:21:27.748 09:03:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:21:30.319 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:30.319 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:30.319 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:21:30.319 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:21:30.319 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:30.319 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:21:30.319 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=78848 00:21:30.319 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:21:30.319 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:21:30.319 [global] 00:21:30.319 thread=1 00:21:30.319 invalidate=1 00:21:30.319 rw=write 00:21:30.319 time_based=1 00:21:30.319 runtime=60 00:21:30.319 ioengine=libaio 00:21:30.319 direct=1 00:21:30.319 bs=4096 00:21:30.319 iodepth=1 00:21:30.320 norandommap=0 00:21:30.320 numjobs=1 00:21:30.320 00:21:30.320 verify_dump=1 00:21:30.320 verify_backlog=512 00:21:30.320 verify_state_save=0 00:21:30.320 do_verify=1 00:21:30.320 verify=crc32c-intel 00:21:30.320 [job0] 00:21:30.320 filename=/dev/nvme0n1 00:21:30.320 Could not set queue depth (nvme0n1) 00:21:30.320 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:30.320 fio-3.35 00:21:30.320 Starting 1 thread 00:21:32.857 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:21:32.858 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.858 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:32.858 true 00:21:32.858 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.858 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:21:32.858 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.858 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:32.858 true 00:21:32.858 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.858 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
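waitforserial above (and waitforserial_disconnect during the teardowns) is a simple lsblk poll keyed on the subsystem serial number. An approximation of that helper under the same bounded-retry scheme visible in the trace (the 2-second sleep and 15-attempt cap match the traced counters; the exact structure of the real function is assumed):

# Poll until a namespace with the given serial is visible to the initiator.
waitforserial() {
    local serial=$1 i=0
    while (( i++ <= 15 )); do
        if (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )); then
            return 0
        fi
        sleep 2
    done
    return 1
}
waitforserial SPDKISFASTANDAWESOME   # serial of nqn.2016-06.io.spdk:cnode1 in this run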
target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:21:32.858 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.858 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:32.858 true 00:21:32.858 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.858 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:21:32.858 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.858 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:32.858 true 00:21:32.858 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.858 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:21:36.198 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:21:36.198 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.198 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:36.198 true 00:21:36.198 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.198 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:21:36.198 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.198 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:36.198 true 00:21:36.198 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.198 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:21:36.198 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.198 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:36.198 true 00:21:36.198 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.198 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:21:36.198 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.198 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:36.198 true 00:21:36.199 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.199 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:21:36.199 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
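This is the core of the initiator-timeout scenario: while fio writes through the Delay0-backed namespace, every latency knob on the delay bdev is pushed several orders of magnitude above its starting value of 30, held for a few seconds, and then restored; the test only passes if fio rides out the stall. The RPC sequence from the trace around this point, written as direct rpc.py calls (fio_pid is the backgrounded fio from above, 78848 in this run):

# Stall the bdev: raise the average and tail latencies far beyond the initiator timeout.
scripts/rpc.py bdev_delay_update_latency Delay0 avg_read 31000000
scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 31000000
scripts/rpc.py bdev_delay_update_latency Delay0 p99_read 31000000
scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 310000000
sleep 3
# Restore the original (negligible) latencies so the queued I/O can drain.
scripts/rpc.py bdev_delay_update_latency Delay0 avg_read 30
scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 30
scripts/rpc.py bdev_delay_update_latency Delay0 p99_read 30
scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 30
# fio must still complete its 60-second run for the test to report success.
wait "$fio_pid"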
target/initiator_timeout.sh@54 -- # wait 78848 00:22:32.428 00:22:32.428 job0: (groupid=0, jobs=1): err= 0: pid=78870: Thu Jul 25 09:04:37 2024 00:22:32.428 read: IOPS=553, BW=2214KiB/s (2268kB/s)(130MiB/60000msec) 00:22:32.428 slat (usec): min=11, max=143, avg=18.47, stdev= 6.76 00:22:32.428 clat (usec): min=210, max=1032, avg=299.16, stdev=46.14 00:22:32.428 lat (usec): min=224, max=1049, avg=317.63, stdev=49.84 00:22:32.428 clat percentiles (usec): 00:22:32.428 | 1.00th=[ 225], 5.00th=[ 237], 10.00th=[ 253], 20.00th=[ 273], 00:22:32.428 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 293], 60.00th=[ 302], 00:22:32.428 | 70.00th=[ 310], 80.00th=[ 322], 90.00th=[ 343], 95.00th=[ 379], 00:22:32.428 | 99.00th=[ 478], 99.50th=[ 502], 99.90th=[ 611], 99.95th=[ 668], 00:22:32.428 | 99.99th=[ 865] 00:22:32.428 write: IOPS=554, BW=2219KiB/s (2272kB/s)(130MiB/60000msec); 0 zone resets 00:22:32.428 slat (usec): min=16, max=12789, avg=30.74, stdev=83.57 00:22:32.428 clat (usec): min=94, max=40552k, avg=1450.52, stdev=222286.52 00:22:32.428 lat (usec): min=170, max=40552k, avg=1481.26, stdev=222286.56 00:22:32.428 clat percentiles (usec): 00:22:32.428 | 1.00th=[ 161], 5.00th=[ 176], 10.00th=[ 188], 20.00th=[ 202], 00:22:32.428 | 30.00th=[ 212], 40.00th=[ 221], 50.00th=[ 229], 60.00th=[ 239], 00:22:32.428 | 70.00th=[ 247], 80.00th=[ 258], 90.00th=[ 273], 95.00th=[ 293], 00:22:32.428 | 99.00th=[ 371], 99.50th=[ 392], 99.90th=[ 482], 99.95th=[ 553], 00:22:32.428 | 99.99th=[ 922] 00:22:32.428 bw ( KiB/s): min= 1424, max= 9472, per=100.00%, avg=6721.64, stdev=1770.87, samples=39 00:22:32.428 iops : min= 356, max= 2368, avg=1680.41, stdev=442.72, samples=39 00:22:32.428 lat (usec) : 100=0.01%, 250=40.91%, 500=58.79%, 750=0.27%, 1000=0.02% 00:22:32.428 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:22:32.428 cpu : usr=0.57%, sys=2.09%, ctx=66500, majf=0, minf=2 00:22:32.428 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:32.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.428 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.428 issued rwts: total=33216,33280,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.428 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:32.428 00:22:32.428 Run status group 0 (all jobs): 00:22:32.428 READ: bw=2214KiB/s (2268kB/s), 2214KiB/s-2214KiB/s (2268kB/s-2268kB/s), io=130MiB (136MB), run=60000-60000msec 00:22:32.428 WRITE: bw=2219KiB/s (2272kB/s), 2219KiB/s-2219KiB/s (2272kB/s-2272kB/s), io=130MiB (136MB), run=60000-60000msec 00:22:32.428 00:22:32.428 Disk stats (read/write): 00:22:32.428 nvme0n1: ios=33122/33280, merge=0/0, ticks=10133/8143, in_queue=18276, util=99.65% 00:22:32.428 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:32.428 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:32.428 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:22:32.428 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:22:32.428 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:32.428 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:22:32.428 09:04:37 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:22:32.428 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:32.429 nvmf hotplug test: fio successful as expected 00:22:32.429 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:22:32.429 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:22:32.429 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:22:32.429 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:32.429 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.429 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:32.429 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.429 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:22:32.429 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:22:32.429 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:22:32.429 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:32.429 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:22:32.429 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:32.429 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:22:32.429 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:32.429 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:32.429 rmmod nvme_tcp 00:22:32.429 rmmod nvme_fabrics 00:22:32.429 rmmod nvme_keyring 00:22:32.429 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:32.429 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:22:32.429 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:22:32.429 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 78785 ']' 00:22:32.429 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 78785 00:22:32.429 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 78785 ']' 00:22:32.429 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 78785 00:22:32.429 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname 00:22:32.429 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:32.429 09:04:37 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78785 00:22:32.429 killing process with pid 78785 00:22:32.429 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:32.429 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:32.429 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78785' 00:22:32.429 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 78785 00:22:32.429 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 78785 00:22:32.429 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:32.429 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:32.429 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:32.429 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:32.429 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:32.429 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.429 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:32.429 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.429 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:32.429 00:22:32.429 real 1m5.754s 00:22:32.429 user 4m0.545s 00:22:32.429 sys 0m16.693s 00:22:32.429 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:32.429 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:32.429 ************************************ 00:22:32.429 END TEST nvmf_initiator_timeout 00:22:32.429 ************************************ 00:22:32.429 09:04:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ virt == phy ]] 00:22:32.429 09:04:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:22:32.429 ************************************ 00:22:32.429 END TEST nvmf_target_extra 00:22:32.429 ************************************ 00:22:32.429 00:22:32.429 real 7m8.303s 00:22:32.429 user 17m33.677s 00:22:32.429 sys 1m48.744s 00:22:32.429 09:04:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:32.429 09:04:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:32.429 09:04:38 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:32.429 09:04:38 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:32.429 09:04:38 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:32.429 09:04:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:32.429 ************************************ 00:22:32.429 
START TEST nvmf_host 00:22:32.429 ************************************ 00:22:32.429 09:04:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:32.429 * Looking for test storage... 00:22:32.429 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:22:32.429 09:04:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:32.429 09:04:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:22:32.429 09:04:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:32.429 09:04:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:32.429 09:04:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:32.429 09:04:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:32.429 09:04:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:32.429 09:04:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:32.429 09:04:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:32.429 09:04:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:32.429 09:04:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:32.429 09:04:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:32.429 09:04:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:22:32.429 09:04:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:22:32.429 09:04:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:32.429 09:04:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:32.429 09:04:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:32.429 09:04:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:32.429 09:04:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:32.429 09:04:38 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:32.429 09:04:38 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:32.429 09:04:38 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:32.429 09:04:38 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.429 09:04:38 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.430 09:04:38 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.430 09:04:38 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:22:32.430 09:04:38 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.430 09:04:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:22:32.430 09:04:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:32.430 09:04:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:32.430 09:04:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:32.430 09:04:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:32.430 09:04:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:32.430 09:04:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:32.430 09:04:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:32.430 09:04:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:32.430 09:04:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:22:32.430 09:04:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:22:32.430 09:04:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:22:32.430 09:04:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:32.430 09:04:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:32.430 09:04:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:32.430 09:04:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.430 ************************************ 00:22:32.430 START TEST nvmf_identify 00:22:32.430 ************************************ 00:22:32.430 09:04:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:32.430 * Looking for test storage... 
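NVMF_SERIAL=SPDKISFASTANDAWESOME exported above is the same serial string the initiator_timeout run earlier in this log polled for before starting fio. That wait boils down to roughly the loop sketched below; the real helper in common/autotest_common.sh is parameterized (serial and expected device count are passed in), so this only mirrors the shape of the loop visible in the log.

    # poll until exactly one block device with the SPDK serial is visible (about 15 retries, 2s apart)
    nvme_device_counter=1 nvme_devices=0 i=0
    while (( i++ <= 15 )); do
        sleep 2
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)
        (( nvme_devices == nvme_device_counter )) && break
    done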
00:22:32.430 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:32.430 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:32.431 Cannot find device "nvmf_tgt_br" 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # true 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:32.431 Cannot find device "nvmf_tgt_br2" 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # true 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:32.431 Cannot find device "nvmf_tgt_br" 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@158 -- # true 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:32.431 Cannot find device "nvmf_tgt_br2" 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # true 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:32.431 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:32.431 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
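The nvmf_veth_init steps running here (they begin above, and the remaining bridge, iptables and ping checks continue just below) build the virtual network every host test in this run uses. Condensed into plain iproute2 commands, and leaving out the second target interface (nvmf_tgt_if2 / 10.0.0.3) that the script also creates, the topology is roughly:

    # network namespace for the SPDK target plus veth pairs for each side
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # initiator keeps 10.0.0.1, the target answers on 10.0.0.2 inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    # bridge the host-side peers together and let NVMe/TCP (port 4420) traffic in
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # reachability check before the target app is started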
00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:32.431 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:32.431 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:22:32.431 00:22:32.431 --- 10.0.0.2 ping statistics --- 00:22:32.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.431 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:32.431 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:32.431 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.102 ms 00:22:32.431 00:22:32.431 --- 10.0.0.3 ping statistics --- 00:22:32.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.431 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:32.431 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:32.431 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:22:32.431 00:22:32.431 --- 10.0.0.1 ping statistics --- 00:22:32.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.431 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=79726 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM 
EXIT 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 79726 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 79726 ']' 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:32.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:32.431 09:04:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:32.690 [2024-07-25 09:04:39.575346] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:22:32.690 [2024-07-25 09:04:39.575549] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:32.690 [2024-07-25 09:04:39.758077] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:32.948 [2024-07-25 09:04:40.015021] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:32.948 [2024-07-25 09:04:40.015096] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:32.948 [2024-07-25 09:04:40.015114] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:32.948 [2024-07-25 09:04:40.015130] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:32.948 [2024-07-25 09:04:40.015146] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
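The app_setup_trace notices just above describe how to pull tracepoint data for this target instance (tracepoint group mask 0xFFFF, shm id 0): either take a live snapshot or keep the shared-memory file for offline decoding. The commands are the ones quoted in the notices; only the output file names below are illustrative.

    # live snapshot of the running nvmf_tgt trace buffer (app name "nvmf", shm id 0)
    spdk_trace -s nvmf -i 0 > nvmf_trace.txt
    # or keep the raw trace shared memory for later offline analysis/debug
    cp /dev/shm/nvmf_trace.0 ./nvmf_trace.0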
00:22:32.948 [2024-07-25 09:04:40.015387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:32.948 [2024-07-25 09:04:40.016044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:32.948 [2024-07-25 09:04:40.016101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.948 [2024-07-25 09:04:40.016126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:33.206 [2024-07-25 09:04:40.220793] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:33.463 09:04:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:33.463 09:04:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:22:33.463 09:04:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:33.463 09:04:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.463 09:04:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:33.463 [2024-07-25 09:04:40.474740] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:33.463 09:04:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.463 09:04:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:33.463 09:04:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:33.463 09:04:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:33.463 09:04:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:33.463 09:04:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.463 09:04:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:33.721 Malloc0 00:22:33.721 09:04:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.721 09:04:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:33.721 09:04:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.721 09:04:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:33.721 09:04:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.721 09:04:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:33.721 09:04:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.721 09:04:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:33.721 09:04:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.721 09:04:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:33.721 09:04:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.721 09:04:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:33.721 [2024-07-25 09:04:40.623759] tcp.c:1006:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:33.721 09:04:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.721 09:04:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:33.721 09:04:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.721 09:04:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:33.721 09:04:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.721 09:04:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:33.721 09:04:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.722 09:04:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:33.722 [ 00:22:33.722 { 00:22:33.722 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:33.722 "subtype": "Discovery", 00:22:33.722 "listen_addresses": [ 00:22:33.722 { 00:22:33.722 "trtype": "TCP", 00:22:33.722 "adrfam": "IPv4", 00:22:33.722 "traddr": "10.0.0.2", 00:22:33.722 "trsvcid": "4420" 00:22:33.722 } 00:22:33.722 ], 00:22:33.722 "allow_any_host": true, 00:22:33.722 "hosts": [] 00:22:33.722 }, 00:22:33.722 { 00:22:33.722 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:33.722 "subtype": "NVMe", 00:22:33.722 "listen_addresses": [ 00:22:33.722 { 00:22:33.722 "trtype": "TCP", 00:22:33.722 "adrfam": "IPv4", 00:22:33.722 "traddr": "10.0.0.2", 00:22:33.722 "trsvcid": "4420" 00:22:33.722 } 00:22:33.722 ], 00:22:33.722 "allow_any_host": true, 00:22:33.722 "hosts": [], 00:22:33.722 "serial_number": "SPDK00000000000001", 00:22:33.722 "model_number": "SPDK bdev Controller", 00:22:33.722 "max_namespaces": 32, 00:22:33.722 "min_cntlid": 1, 00:22:33.722 "max_cntlid": 65519, 00:22:33.722 "namespaces": [ 00:22:33.722 { 00:22:33.722 "nsid": 1, 00:22:33.722 "bdev_name": "Malloc0", 00:22:33.722 "name": "Malloc0", 00:22:33.722 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:33.722 "eui64": "ABCDEF0123456789", 00:22:33.722 "uuid": "1b9225fd-ba89-4072-aab8-fd196194e0f5" 00:22:33.722 } 00:22:33.722 ] 00:22:33.722 } 00:22:33.722 ] 00:22:33.722 09:04:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.722 09:04:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:33.722 [2024-07-25 09:04:40.705343] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:22:33.722 [2024-07-25 09:04:40.705478] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79762 ] 00:22:33.984 [2024-07-25 09:04:40.873254] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:22:33.984 [2024-07-25 09:04:40.873399] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:33.984 [2024-07-25 09:04:40.873415] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:33.984 [2024-07-25 09:04:40.873448] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:33.984 [2024-07-25 09:04:40.873466] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:33.984 [2024-07-25 09:04:40.873630] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:22:33.984 [2024-07-25 09:04:40.873702] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500000f080 0 00:22:33.984 [2024-07-25 09:04:40.880860] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:33.984 [2024-07-25 09:04:40.880902] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:33.984 [2024-07-25 09:04:40.880920] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:33.984 [2024-07-25 09:04:40.880931] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:33.984 [2024-07-25 09:04:40.881029] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.984 [2024-07-25 09:04:40.881045] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.984 [2024-07-25 09:04:40.881054] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:33.984 [2024-07-25 09:04:40.881080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:33.984 [2024-07-25 09:04:40.881123] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:33.984 [2024-07-25 09:04:40.888847] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.984 [2024-07-25 09:04:40.888883] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.984 [2024-07-25 09:04:40.888900] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.984 [2024-07-25 09:04:40.888913] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:33.984 [2024-07-25 09:04:40.888934] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:33.984 [2024-07-25 09:04:40.888958] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:22:33.984 [2024-07-25 09:04:40.888971] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:22:33.984 [2024-07-25 09:04:40.888994] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.984 [2024-07-25 09:04:40.889004] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:22:33.984 [2024-07-25 09:04:40.889012] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:33.984 [2024-07-25 09:04:40.889030] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.984 [2024-07-25 09:04:40.889075] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:33.984 [2024-07-25 09:04:40.889195] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.984 [2024-07-25 09:04:40.889211] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.984 [2024-07-25 09:04:40.889219] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.984 [2024-07-25 09:04:40.889227] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:33.984 [2024-07-25 09:04:40.889239] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:22:33.984 [2024-07-25 09:04:40.889253] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:22:33.984 [2024-07-25 09:04:40.889268] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.984 [2024-07-25 09:04:40.889280] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.984 [2024-07-25 09:04:40.889291] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:33.984 [2024-07-25 09:04:40.889310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.984 [2024-07-25 09:04:40.889343] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:33.984 [2024-07-25 09:04:40.889442] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.984 [2024-07-25 09:04:40.889459] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.984 [2024-07-25 09:04:40.889472] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.984 [2024-07-25 09:04:40.889484] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:33.984 [2024-07-25 09:04:40.889502] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:22:33.984 [2024-07-25 09:04:40.889526] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:22:33.984 [2024-07-25 09:04:40.889543] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.984 [2024-07-25 09:04:40.889551] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.984 [2024-07-25 09:04:40.889559] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:33.984 [2024-07-25 09:04:40.889574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.984 [2024-07-25 09:04:40.889614] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:33.984 [2024-07-25 09:04:40.889709] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:22:33.984 [2024-07-25 09:04:40.889722] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.984 [2024-07-25 09:04:40.889728] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.984 [2024-07-25 09:04:40.889736] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:33.984 [2024-07-25 09:04:40.889752] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:33.984 [2024-07-25 09:04:40.889807] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.984 [2024-07-25 09:04:40.889836] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.984 [2024-07-25 09:04:40.889845] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:33.984 [2024-07-25 09:04:40.889860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.984 [2024-07-25 09:04:40.889891] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:33.985 [2024-07-25 09:04:40.889968] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.985 [2024-07-25 09:04:40.889986] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.985 [2024-07-25 09:04:40.889994] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.985 [2024-07-25 09:04:40.890002] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:33.985 [2024-07-25 09:04:40.890012] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:22:33.985 [2024-07-25 09:04:40.890022] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:22:33.985 [2024-07-25 09:04:40.890036] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:33.985 [2024-07-25 09:04:40.890146] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:22:33.985 [2024-07-25 09:04:40.890160] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:33.985 [2024-07-25 09:04:40.890176] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.985 [2024-07-25 09:04:40.890185] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.985 [2024-07-25 09:04:40.890193] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:33.985 [2024-07-25 09:04:40.890208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.985 [2024-07-25 09:04:40.890242] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:33.985 [2024-07-25 09:04:40.890324] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.985 [2024-07-25 09:04:40.890337] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.985 [2024-07-25 
09:04:40.890343] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.985 [2024-07-25 09:04:40.890351] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:33.985 [2024-07-25 09:04:40.890360] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:33.985 [2024-07-25 09:04:40.890383] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.985 [2024-07-25 09:04:40.890392] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.985 [2024-07-25 09:04:40.890403] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:33.985 [2024-07-25 09:04:40.890417] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.985 [2024-07-25 09:04:40.890445] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:33.985 [2024-07-25 09:04:40.890519] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.985 [2024-07-25 09:04:40.890538] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.985 [2024-07-25 09:04:40.890545] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.985 [2024-07-25 09:04:40.890552] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:33.985 [2024-07-25 09:04:40.890562] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:33.985 [2024-07-25 09:04:40.890572] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:22:33.985 [2024-07-25 09:04:40.890586] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:22:33.985 [2024-07-25 09:04:40.890617] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:22:33.985 [2024-07-25 09:04:40.890643] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.985 [2024-07-25 09:04:40.890653] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:33.985 [2024-07-25 09:04:40.890668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.985 [2024-07-25 09:04:40.890713] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:33.985 [2024-07-25 09:04:40.890848] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:33.985 [2024-07-25 09:04:40.890863] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:33.985 [2024-07-25 09:04:40.890870] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:33.985 [2024-07-25 09:04:40.890888] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=0 00:22:33.985 [2024-07-25 09:04:40.890900] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500000f080): 
expected_datao=0, payload_size=4096 00:22:33.985 [2024-07-25 09:04:40.890909] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.985 [2024-07-25 09:04:40.890928] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:33.985 [2024-07-25 09:04:40.890938] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:33.985 [2024-07-25 09:04:40.890953] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.985 [2024-07-25 09:04:40.890963] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.985 [2024-07-25 09:04:40.890969] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.985 [2024-07-25 09:04:40.890976] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:33.985 [2024-07-25 09:04:40.890996] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:22:33.985 [2024-07-25 09:04:40.891006] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:22:33.985 [2024-07-25 09:04:40.891014] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:22:33.985 [2024-07-25 09:04:40.891027] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:22:33.985 [2024-07-25 09:04:40.891037] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:22:33.985 [2024-07-25 09:04:40.891050] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:22:33.985 [2024-07-25 09:04:40.891069] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:22:33.985 [2024-07-25 09:04:40.891083] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.985 [2024-07-25 09:04:40.891092] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.985 [2024-07-25 09:04:40.891099] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:33.985 [2024-07-25 09:04:40.891115] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:33.985 [2024-07-25 09:04:40.891152] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:33.985 [2024-07-25 09:04:40.891229] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.985 [2024-07-25 09:04:40.891242] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.985 [2024-07-25 09:04:40.891248] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.985 [2024-07-25 09:04:40.891255] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:33.985 [2024-07-25 09:04:40.891269] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.985 [2024-07-25 09:04:40.891277] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.985 [2024-07-25 09:04:40.891284] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:33.985 [2024-07-25 09:04:40.891306] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.985 [2024-07-25 09:04:40.891319] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.985 [2024-07-25 09:04:40.891328] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.985 [2024-07-25 09:04:40.891335] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500000f080) 00:22:33.985 [2024-07-25 09:04:40.891346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.985 [2024-07-25 09:04:40.891357] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.985 [2024-07-25 09:04:40.891364] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.985 [2024-07-25 09:04:40.891370] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500000f080) 00:22:33.985 [2024-07-25 09:04:40.891380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.985 [2024-07-25 09:04:40.891390] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.985 [2024-07-25 09:04:40.891397] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.985 [2024-07-25 09:04:40.891403] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:33.985 [2024-07-25 09:04:40.891423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.985 [2024-07-25 09:04:40.891433] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:22:33.985 [2024-07-25 09:04:40.891456] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:33.985 [2024-07-25 09:04:40.891469] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.985 [2024-07-25 09:04:40.891477] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:22:33.985 [2024-07-25 09:04:40.891490] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.986 [2024-07-25 09:04:40.891521] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:33.986 [2024-07-25 09:04:40.891533] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:22:33.986 [2024-07-25 09:04:40.891540] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:22:33.986 [2024-07-25 09:04:40.891548] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:33.986 [2024-07-25 09:04:40.891555] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:22:33.986 [2024-07-25 09:04:40.891678] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.986 [2024-07-25 09:04:40.891691] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.986 [2024-07-25 09:04:40.891697] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:22:33.986 [2024-07-25 09:04:40.891705] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:22:33.986 [2024-07-25 09:04:40.891715] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:22:33.986 [2024-07-25 09:04:40.891725] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:22:33.986 [2024-07-25 09:04:40.891757] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.986 [2024-07-25 09:04:40.891768] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:22:33.986 [2024-07-25 09:04:40.891786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.986 [2024-07-25 09:04:40.891829] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:22:33.986 [2024-07-25 09:04:40.891930] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:33.986 [2024-07-25 09:04:40.891944] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:33.986 [2024-07-25 09:04:40.891951] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:33.986 [2024-07-25 09:04:40.891959] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:22:33.986 [2024-07-25 09:04:40.891968] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:22:33.986 [2024-07-25 09:04:40.892001] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.986 [2024-07-25 09:04:40.892021] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:33.986 [2024-07-25 09:04:40.892030] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:33.986 [2024-07-25 09:04:40.892045] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.986 [2024-07-25 09:04:40.892058] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.986 [2024-07-25 09:04:40.892065] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.986 [2024-07-25 09:04:40.892073] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:22:33.986 [2024-07-25 09:04:40.892101] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:22:33.986 [2024-07-25 09:04:40.892165] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.986 [2024-07-25 09:04:40.892180] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:22:33.986 [2024-07-25 09:04:40.892195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.986 [2024-07-25 09:04:40.892209] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.986 [2024-07-25 09:04:40.892217] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.986 [2024-07-25 09:04:40.892230] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:22:33.986 [2024-07-25 
09:04:40.892245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.986 [2024-07-25 09:04:40.892288] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:22:33.986 [2024-07-25 09:04:40.892302] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:22:33.986 [2024-07-25 09:04:40.892702] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:33.986 [2024-07-25 09:04:40.892732] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:33.986 [2024-07-25 09:04:40.892742] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:33.986 [2024-07-25 09:04:40.892749] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=1024, cccid=4 00:22:33.986 [2024-07-25 09:04:40.892759] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=1024 00:22:33.986 [2024-07-25 09:04:40.892767] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.986 [2024-07-25 09:04:40.892780] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:33.986 [2024-07-25 09:04:40.892793] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:33.986 [2024-07-25 09:04:40.892805] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.986 [2024-07-25 09:04:40.896836] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.986 [2024-07-25 09:04:40.896861] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.986 [2024-07-25 09:04:40.896878] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:22:33.986 [2024-07-25 09:04:40.896898] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.986 [2024-07-25 09:04:40.896909] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.986 [2024-07-25 09:04:40.896915] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.986 [2024-07-25 09:04:40.896922] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:22:33.986 [2024-07-25 09:04:40.896952] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.986 [2024-07-25 09:04:40.896968] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:22:33.986 [2024-07-25 09:04:40.896986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.986 [2024-07-25 09:04:40.897030] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:22:33.986 [2024-07-25 09:04:40.897175] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:33.986 [2024-07-25 09:04:40.897187] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:33.986 [2024-07-25 09:04:40.897194] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:33.986 [2024-07-25 09:04:40.897201] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=3072, cccid=4 00:22:33.986 [2024-07-25 09:04:40.897209] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on 
tqpair(0x61500000f080): expected_datao=0, payload_size=3072 00:22:33.986 [2024-07-25 09:04:40.897220] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.986 [2024-07-25 09:04:40.897234] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:33.986 [2024-07-25 09:04:40.897242] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:33.986 [2024-07-25 09:04:40.897255] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.986 [2024-07-25 09:04:40.897265] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.986 [2024-07-25 09:04:40.897271] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.986 [2024-07-25 09:04:40.897279] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:22:33.986 [2024-07-25 09:04:40.897300] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.986 [2024-07-25 09:04:40.897311] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:22:33.986 [2024-07-25 09:04:40.897330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.986 [2024-07-25 09:04:40.897368] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:22:33.986 [2024-07-25 09:04:40.897477] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:33.986 [2024-07-25 09:04:40.897497] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:33.986 [2024-07-25 09:04:40.897505] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:33.986 [2024-07-25 09:04:40.897512] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=8, cccid=4 00:22:33.986 [2024-07-25 09:04:40.897520] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=8 00:22:33.986 [2024-07-25 09:04:40.897539] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.986 [2024-07-25 09:04:40.897552] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:33.986 [2024-07-25 09:04:40.897559] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:33.986 [2024-07-25 09:04:40.897591] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.986 [2024-07-25 09:04:40.897604] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.986 [2024-07-25 09:04:40.897610] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.986 [2024-07-25 09:04:40.897617] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:22:33.986 ===================================================== 00:22:33.986 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:33.986 ===================================================== 00:22:33.986 Controller Capabilities/Features 00:22:33.986 ================================ 00:22:33.986 Vendor ID: 0000 00:22:33.986 Subsystem Vendor ID: 0000 00:22:33.986 Serial Number: .................... 00:22:33.986 Model Number: ........................................ 
00:22:33.987 Firmware Version: 24.09 00:22:33.987 Recommended Arb Burst: 0 00:22:33.987 IEEE OUI Identifier: 00 00 00 00:22:33.987 Multi-path I/O 00:22:33.987 May have multiple subsystem ports: No 00:22:33.987 May have multiple controllers: No 00:22:33.987 Associated with SR-IOV VF: No 00:22:33.987 Max Data Transfer Size: 131072 00:22:33.987 Max Number of Namespaces: 0 00:22:33.987 Max Number of I/O Queues: 1024 00:22:33.987 NVMe Specification Version (VS): 1.3 00:22:33.987 NVMe Specification Version (Identify): 1.3 00:22:33.987 Maximum Queue Entries: 128 00:22:33.987 Contiguous Queues Required: Yes 00:22:33.987 Arbitration Mechanisms Supported 00:22:33.987 Weighted Round Robin: Not Supported 00:22:33.987 Vendor Specific: Not Supported 00:22:33.987 Reset Timeout: 15000 ms 00:22:33.987 Doorbell Stride: 4 bytes 00:22:33.987 NVM Subsystem Reset: Not Supported 00:22:33.987 Command Sets Supported 00:22:33.987 NVM Command Set: Supported 00:22:33.987 Boot Partition: Not Supported 00:22:33.987 Memory Page Size Minimum: 4096 bytes 00:22:33.987 Memory Page Size Maximum: 4096 bytes 00:22:33.987 Persistent Memory Region: Not Supported 00:22:33.987 Optional Asynchronous Events Supported 00:22:33.987 Namespace Attribute Notices: Not Supported 00:22:33.987 Firmware Activation Notices: Not Supported 00:22:33.987 ANA Change Notices: Not Supported 00:22:33.987 PLE Aggregate Log Change Notices: Not Supported 00:22:33.987 LBA Status Info Alert Notices: Not Supported 00:22:33.987 EGE Aggregate Log Change Notices: Not Supported 00:22:33.987 Normal NVM Subsystem Shutdown event: Not Supported 00:22:33.987 Zone Descriptor Change Notices: Not Supported 00:22:33.987 Discovery Log Change Notices: Supported 00:22:33.987 Controller Attributes 00:22:33.987 128-bit Host Identifier: Not Supported 00:22:33.987 Non-Operational Permissive Mode: Not Supported 00:22:33.987 NVM Sets: Not Supported 00:22:33.987 Read Recovery Levels: Not Supported 00:22:33.987 Endurance Groups: Not Supported 00:22:33.987 Predictable Latency Mode: Not Supported 00:22:33.987 Traffic Based Keep ALive: Not Supported 00:22:33.987 Namespace Granularity: Not Supported 00:22:33.987 SQ Associations: Not Supported 00:22:33.987 UUID List: Not Supported 00:22:33.987 Multi-Domain Subsystem: Not Supported 00:22:33.987 Fixed Capacity Management: Not Supported 00:22:33.987 Variable Capacity Management: Not Supported 00:22:33.987 Delete Endurance Group: Not Supported 00:22:33.987 Delete NVM Set: Not Supported 00:22:33.987 Extended LBA Formats Supported: Not Supported 00:22:33.987 Flexible Data Placement Supported: Not Supported 00:22:33.987 00:22:33.987 Controller Memory Buffer Support 00:22:33.987 ================================ 00:22:33.987 Supported: No 00:22:33.987 00:22:33.987 Persistent Memory Region Support 00:22:33.987 ================================ 00:22:33.987 Supported: No 00:22:33.987 00:22:33.987 Admin Command Set Attributes 00:22:33.987 ============================ 00:22:33.987 Security Send/Receive: Not Supported 00:22:33.987 Format NVM: Not Supported 00:22:33.987 Firmware Activate/Download: Not Supported 00:22:33.987 Namespace Management: Not Supported 00:22:33.987 Device Self-Test: Not Supported 00:22:33.987 Directives: Not Supported 00:22:33.987 NVMe-MI: Not Supported 00:22:33.987 Virtualization Management: Not Supported 00:22:33.987 Doorbell Buffer Config: Not Supported 00:22:33.987 Get LBA Status Capability: Not Supported 00:22:33.987 Command & Feature Lockdown Capability: Not Supported 00:22:33.987 Abort Command Limit: 1 00:22:33.987 Async 
Event Request Limit: 4 00:22:33.987 Number of Firmware Slots: N/A 00:22:33.987 Firmware Slot 1 Read-Only: N/A 00:22:33.987 Firmware Activation Without Reset: N/A 00:22:33.987 Multiple Update Detection Support: N/A 00:22:33.987 Firmware Update Granularity: No Information Provided 00:22:33.987 Per-Namespace SMART Log: No 00:22:33.987 Asymmetric Namespace Access Log Page: Not Supported 00:22:33.987 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:33.987 Command Effects Log Page: Not Supported 00:22:33.987 Get Log Page Extended Data: Supported 00:22:33.987 Telemetry Log Pages: Not Supported 00:22:33.987 Persistent Event Log Pages: Not Supported 00:22:33.987 Supported Log Pages Log Page: May Support 00:22:33.987 Commands Supported & Effects Log Page: Not Supported 00:22:33.987 Feature Identifiers & Effects Log Page:May Support 00:22:33.987 NVMe-MI Commands & Effects Log Page: May Support 00:22:33.987 Data Area 4 for Telemetry Log: Not Supported 00:22:33.987 Error Log Page Entries Supported: 128 00:22:33.987 Keep Alive: Not Supported 00:22:33.987 00:22:33.987 NVM Command Set Attributes 00:22:33.987 ========================== 00:22:33.987 Submission Queue Entry Size 00:22:33.987 Max: 1 00:22:33.987 Min: 1 00:22:33.987 Completion Queue Entry Size 00:22:33.987 Max: 1 00:22:33.987 Min: 1 00:22:33.987 Number of Namespaces: 0 00:22:33.987 Compare Command: Not Supported 00:22:33.987 Write Uncorrectable Command: Not Supported 00:22:33.987 Dataset Management Command: Not Supported 00:22:33.987 Write Zeroes Command: Not Supported 00:22:33.987 Set Features Save Field: Not Supported 00:22:33.987 Reservations: Not Supported 00:22:33.987 Timestamp: Not Supported 00:22:33.987 Copy: Not Supported 00:22:33.987 Volatile Write Cache: Not Present 00:22:33.987 Atomic Write Unit (Normal): 1 00:22:33.987 Atomic Write Unit (PFail): 1 00:22:33.987 Atomic Compare & Write Unit: 1 00:22:33.987 Fused Compare & Write: Supported 00:22:33.987 Scatter-Gather List 00:22:33.987 SGL Command Set: Supported 00:22:33.987 SGL Keyed: Supported 00:22:33.987 SGL Bit Bucket Descriptor: Not Supported 00:22:33.987 SGL Metadata Pointer: Not Supported 00:22:33.987 Oversized SGL: Not Supported 00:22:33.987 SGL Metadata Address: Not Supported 00:22:33.987 SGL Offset: Supported 00:22:33.987 Transport SGL Data Block: Not Supported 00:22:33.987 Replay Protected Memory Block: Not Supported 00:22:33.987 00:22:33.987 Firmware Slot Information 00:22:33.987 ========================= 00:22:33.987 Active slot: 0 00:22:33.987 00:22:33.987 00:22:33.987 Error Log 00:22:33.987 ========= 00:22:33.987 00:22:33.987 Active Namespaces 00:22:33.987 ================= 00:22:33.987 Discovery Log Page 00:22:33.987 ================== 00:22:33.987 Generation Counter: 2 00:22:33.987 Number of Records: 2 00:22:33.987 Record Format: 0 00:22:33.987 00:22:33.987 Discovery Log Entry 0 00:22:33.987 ---------------------- 00:22:33.987 Transport Type: 3 (TCP) 00:22:33.987 Address Family: 1 (IPv4) 00:22:33.987 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:33.987 Entry Flags: 00:22:33.987 Duplicate Returned Information: 1 00:22:33.987 Explicit Persistent Connection Support for Discovery: 1 00:22:33.987 Transport Requirements: 00:22:33.987 Secure Channel: Not Required 00:22:33.987 Port ID: 0 (0x0000) 00:22:33.987 Controller ID: 65535 (0xffff) 00:22:33.987 Admin Max SQ Size: 128 00:22:33.987 Transport Service Identifier: 4420 00:22:33.987 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:33.987 Transport Address: 10.0.0.2 00:22:33.987 
Discovery Log Entry 1 00:22:33.987 ---------------------- 00:22:33.987 Transport Type: 3 (TCP) 00:22:33.987 Address Family: 1 (IPv4) 00:22:33.987 Subsystem Type: 2 (NVM Subsystem) 00:22:33.987 Entry Flags: 00:22:33.987 Duplicate Returned Information: 0 00:22:33.987 Explicit Persistent Connection Support for Discovery: 0 00:22:33.987 Transport Requirements: 00:22:33.988 Secure Channel: Not Required 00:22:33.988 Port ID: 0 (0x0000) 00:22:33.988 Controller ID: 65535 (0xffff) 00:22:33.988 Admin Max SQ Size: 128 00:22:33.988 Transport Service Identifier: 4420 00:22:33.988 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:33.988 Transport Address: 10.0.0.2 [2024-07-25 09:04:40.897786] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:22:33.988 [2024-07-25 09:04:40.897828] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:33.988 [2024-07-25 09:04:40.897845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.988 [2024-07-25 09:04:40.897856] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500000f080 00:22:33.988 [2024-07-25 09:04:40.897865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.988 [2024-07-25 09:04:40.897873] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500000f080 00:22:33.988 [2024-07-25 09:04:40.897882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.988 [2024-07-25 09:04:40.897890] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:33.988 [2024-07-25 09:04:40.897898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.988 [2024-07-25 09:04:40.897919] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.988 [2024-07-25 09:04:40.897929] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.988 [2024-07-25 09:04:40.897937] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:33.988 [2024-07-25 09:04:40.897952] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.988 [2024-07-25 09:04:40.897987] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:33.988 [2024-07-25 09:04:40.898065] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.988 [2024-07-25 09:04:40.898085] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.988 [2024-07-25 09:04:40.898094] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.988 [2024-07-25 09:04:40.898107] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:33.988 [2024-07-25 09:04:40.898122] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.988 [2024-07-25 09:04:40.898130] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.988 [2024-07-25 09:04:40.898138] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0x61500000f080) 00:22:33.988 [2024-07-25 09:04:40.898152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.988 [2024-07-25 09:04:40.898187] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:33.988 [2024-07-25 09:04:40.898286] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.988 [2024-07-25 09:04:40.898303] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.988 [2024-07-25 09:04:40.898310] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.988 [2024-07-25 09:04:40.898317] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:33.988 [2024-07-25 09:04:40.898327] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:22:33.988 [2024-07-25 09:04:40.898337] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:22:33.988 [2024-07-25 09:04:40.898360] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.988 [2024-07-25 09:04:40.898370] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.988 [2024-07-25 09:04:40.898378] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:33.988 [2024-07-25 09:04:40.898398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.988 [2024-07-25 09:04:40.898430] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:33.988 [2024-07-25 09:04:40.898507] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.988 [2024-07-25 09:04:40.898519] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.988 [2024-07-25 09:04:40.898526] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.988 [2024-07-25 09:04:40.898533] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:33.988 [2024-07-25 09:04:40.898558] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.988 [2024-07-25 09:04:40.898568] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.988 [2024-07-25 09:04:40.898574] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:33.988 [2024-07-25 09:04:40.898587] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.988 [2024-07-25 09:04:40.898614] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:33.988 [2024-07-25 09:04:40.898683] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.988 [2024-07-25 09:04:40.898699] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.988 [2024-07-25 09:04:40.898706] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.988 [2024-07-25 09:04:40.898713] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:33.988 [2024-07-25 09:04:40.898731] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.988 [2024-07-25 09:04:40.898739] 
nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.988 [2024-07-25 09:04:40.898746] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:33.988 [2024-07-25 09:04:40.898759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.988 [2024-07-25 09:04:40.898786] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:33.988 [2024-07-25 09:04:40.898886] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.988 [2024-07-25 09:04:40.898901] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.988 [2024-07-25 09:04:40.898907] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.988 [2024-07-25 09:04:40.898915] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:33.988 [2024-07-25 09:04:40.898934] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.988 [2024-07-25 09:04:40.898943] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.988 [2024-07-25 09:04:40.898950] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:33.988 [2024-07-25 09:04:40.898963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.988 [2024-07-25 09:04:40.898991] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:33.988 [2024-07-25 09:04:40.899064] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.988 [2024-07-25 09:04:40.899078] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.988 [2024-07-25 09:04:40.899085] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.988 [2024-07-25 09:04:40.899092] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:33.988 [2024-07-25 09:04:40.899110] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.988 [2024-07-25 09:04:40.899118] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.988 [2024-07-25 09:04:40.899125] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:33.988 [2024-07-25 09:04:40.899138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.988 [2024-07-25 09:04:40.899172] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:33.988 [2024-07-25 09:04:40.899242] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.988 [2024-07-25 09:04:40.899254] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.988 [2024-07-25 09:04:40.899261] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.988 [2024-07-25 09:04:40.899268] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:33.988 [2024-07-25 09:04:40.899285] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.988 [2024-07-25 09:04:40.899300] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.988 [2024-07-25 09:04:40.899308] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:33.988 [2024-07-25 09:04:40.899321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.988 [2024-07-25 09:04:40.899348] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:33.988 [2024-07-25 09:04:40.899425] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.988 [2024-07-25 09:04:40.899437] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.988 [2024-07-25 09:04:40.899444] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.988 [2024-07-25 09:04:40.899454] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:33.988 [2024-07-25 09:04:40.899472] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.988 [2024-07-25 09:04:40.899481] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.989 [2024-07-25 09:04:40.899487] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:33.989 [2024-07-25 09:04:40.899500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.989 [2024-07-25 09:04:40.899526] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:33.989 [2024-07-25 09:04:40.899587] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.989 [2024-07-25 09:04:40.899603] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.989 [2024-07-25 09:04:40.899610] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.989 [2024-07-25 09:04:40.899617] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:33.989 [2024-07-25 09:04:40.899634] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.989 [2024-07-25 09:04:40.899642] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.989 [2024-07-25 09:04:40.899649] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:33.989 [2024-07-25 09:04:40.899662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.989 [2024-07-25 09:04:40.899688] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:33.989 [2024-07-25 09:04:40.899749] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.989 [2024-07-25 09:04:40.899762] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.989 [2024-07-25 09:04:40.899768] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.989 [2024-07-25 09:04:40.899776] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:33.989 [2024-07-25 09:04:40.899793] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.989 [2024-07-25 09:04:40.899802] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.989 [2024-07-25 09:04:40.899808] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:33.989 [2024-07-25 09:04:40.899845] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.989 [2024-07-25 09:04:40.899876] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:33.989 [2024-07-25 09:04:40.899949] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.989 [2024-07-25 09:04:40.899987] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.989 [2024-07-25 09:04:40.899996] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.989 [2024-07-25 09:04:40.900004] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:33.989 [2024-07-25 09:04:40.900030] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.989 [2024-07-25 09:04:40.900039] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.989 [2024-07-25 09:04:40.900046] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:33.989 [2024-07-25 09:04:40.900063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.989 [2024-07-25 09:04:40.900092] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:33.989 [2024-07-25 09:04:40.900252] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.989 [2024-07-25 09:04:40.900274] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.989 [2024-07-25 09:04:40.900282] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.989 [2024-07-25 09:04:40.900289] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:33.989 [2024-07-25 09:04:40.900308] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.989 [2024-07-25 09:04:40.900330] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.989 [2024-07-25 09:04:40.900337] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:33.989 [2024-07-25 09:04:40.900350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.989 [2024-07-25 09:04:40.900386] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:33.989 [2024-07-25 09:04:40.900455] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.989 [2024-07-25 09:04:40.900467] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.989 [2024-07-25 09:04:40.900474] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.989 [2024-07-25 09:04:40.900481] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:33.989 [2024-07-25 09:04:40.900498] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.989 [2024-07-25 09:04:40.900517] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.989 [2024-07-25 09:04:40.900524] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:33.989 [2024-07-25 09:04:40.900537] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.989 [2024-07-25 
09:04:40.900564] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:33.989 [2024-07-25 09:04:40.900634] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.989 [2024-07-25 09:04:40.900656] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.989 [2024-07-25 09:04:40.900664] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.989 [2024-07-25 09:04:40.900675] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:33.989 [2024-07-25 09:04:40.900695] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.989 [2024-07-25 09:04:40.900704] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.989 [2024-07-25 09:04:40.900711] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:33.989 [2024-07-25 09:04:40.900723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.989 [2024-07-25 09:04:40.900750] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:33.989 [2024-07-25 09:04:40.904847] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.989 [2024-07-25 09:04:40.904877] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.989 [2024-07-25 09:04:40.904886] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.989 [2024-07-25 09:04:40.904894] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:33.989 [2024-07-25 09:04:40.904917] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:33.989 [2024-07-25 09:04:40.904927] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:33.989 [2024-07-25 09:04:40.904934] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:33.989 [2024-07-25 09:04:40.904958] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.989 [2024-07-25 09:04:40.904994] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:33.989 [2024-07-25 09:04:40.905074] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:33.989 [2024-07-25 09:04:40.905087] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:33.989 [2024-07-25 09:04:40.905094] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:33.989 [2024-07-25 09:04:40.905101] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:33.989 [2024-07-25 09:04:40.905116] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:22:33.989 00:22:33.989 09:04:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:33.989 [2024-07-25 09:04:41.036250] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:22:33.990 [2024-07-25 09:04:41.036371] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79774 ] 00:22:34.254 [2024-07-25 09:04:41.216048] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:22:34.255 [2024-07-25 09:04:41.216254] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:34.255 [2024-07-25 09:04:41.216274] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:34.255 [2024-07-25 09:04:41.216308] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:34.255 [2024-07-25 09:04:41.216332] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:34.255 [2024-07-25 09:04:41.216554] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:22:34.255 [2024-07-25 09:04:41.216641] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500000f080 0 00:22:34.255 [2024-07-25 09:04:41.223854] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:34.255 [2024-07-25 09:04:41.223907] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:34.255 [2024-07-25 09:04:41.223925] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:34.255 [2024-07-25 09:04:41.223946] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:34.255 [2024-07-25 09:04:41.224089] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:34.255 [2024-07-25 09:04:41.224110] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:34.255 [2024-07-25 09:04:41.224122] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:34.255 [2024-07-25 09:04:41.224153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:34.255 [2024-07-25 09:04:41.224207] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:34.255 [2024-07-25 09:04:41.231854] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:34.255 [2024-07-25 09:04:41.231897] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:34.255 [2024-07-25 09:04:41.231910] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:34.255 [2024-07-25 09:04:41.231922] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:34.255 [2024-07-25 09:04:41.231959] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:34.255 [2024-07-25 09:04:41.231997] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:22:34.255 [2024-07-25 09:04:41.232022] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:22:34.255 [2024-07-25 09:04:41.232046] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:34.255 [2024-07-25 09:04:41.232058] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:34.255 [2024-07-25 
09:04:41.232068] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:34.255 [2024-07-25 09:04:41.232089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.255 [2024-07-25 09:04:41.232143] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:34.255 [2024-07-25 09:04:41.232229] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:34.255 [2024-07-25 09:04:41.232246] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:34.255 [2024-07-25 09:04:41.232260] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:34.255 [2024-07-25 09:04:41.232270] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:34.255 [2024-07-25 09:04:41.232285] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:22:34.255 [2024-07-25 09:04:41.232303] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:22:34.255 [2024-07-25 09:04:41.232320] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:34.255 [2024-07-25 09:04:41.232331] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:34.255 [2024-07-25 09:04:41.232344] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:34.255 [2024-07-25 09:04:41.232368] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.255 [2024-07-25 09:04:41.232406] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:34.255 [2024-07-25 09:04:41.232477] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:34.255 [2024-07-25 09:04:41.232493] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:34.255 [2024-07-25 09:04:41.232501] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:34.255 [2024-07-25 09:04:41.232510] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:34.255 [2024-07-25 09:04:41.232524] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:22:34.255 [2024-07-25 09:04:41.232564] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:22:34.255 [2024-07-25 09:04:41.232582] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:34.255 [2024-07-25 09:04:41.232592] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:34.255 [2024-07-25 09:04:41.232602] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:34.255 [2024-07-25 09:04:41.232620] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.255 [2024-07-25 09:04:41.232659] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:34.255 [2024-07-25 09:04:41.232722] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:34.255 [2024-07-25 09:04:41.232737] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:34.255 [2024-07-25 09:04:41.232749] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:34.255 [2024-07-25 09:04:41.232759] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:34.255 [2024-07-25 09:04:41.232773] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:34.255 [2024-07-25 09:04:41.232795] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:34.255 [2024-07-25 09:04:41.232806] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:34.255 [2024-07-25 09:04:41.232834] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:34.255 [2024-07-25 09:04:41.232861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.255 [2024-07-25 09:04:41.232898] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:34.255 [2024-07-25 09:04:41.232980] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:34.255 [2024-07-25 09:04:41.232996] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:34.255 [2024-07-25 09:04:41.233004] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:34.255 [2024-07-25 09:04:41.233013] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:34.255 [2024-07-25 09:04:41.233025] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:22:34.255 [2024-07-25 09:04:41.233038] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:22:34.255 [2024-07-25 09:04:41.233055] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:34.255 [2024-07-25 09:04:41.233168] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:22:34.255 [2024-07-25 09:04:41.233179] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:34.255 [2024-07-25 09:04:41.233198] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:34.255 [2024-07-25 09:04:41.233209] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:34.255 [2024-07-25 09:04:41.233219] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:34.255 [2024-07-25 09:04:41.233246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.255 [2024-07-25 09:04:41.233282] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:34.255 [2024-07-25 09:04:41.233357] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:34.255 [2024-07-25 09:04:41.233372] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:34.255 [2024-07-25 09:04:41.233380] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:34.255 [2024-07-25 
09:04:41.233389] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:34.255 [2024-07-25 09:04:41.233402] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:34.255 [2024-07-25 09:04:41.233431] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:34.255 [2024-07-25 09:04:41.233443] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:34.255 [2024-07-25 09:04:41.233453] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:34.255 [2024-07-25 09:04:41.233471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.255 [2024-07-25 09:04:41.233505] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:34.255 [2024-07-25 09:04:41.233578] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:34.255 [2024-07-25 09:04:41.233593] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:34.255 [2024-07-25 09:04:41.233601] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:34.255 [2024-07-25 09:04:41.233614] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:34.255 [2024-07-25 09:04:41.233627] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:34.256 [2024-07-25 09:04:41.233639] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:22:34.256 [2024-07-25 09:04:41.233660] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:22:34.256 [2024-07-25 09:04:41.233683] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:22:34.256 [2024-07-25 09:04:41.233711] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:34.256 [2024-07-25 09:04:41.233722] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:34.256 [2024-07-25 09:04:41.233740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.256 [2024-07-25 09:04:41.233794] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:34.256 [2024-07-25 09:04:41.233936] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:34.256 [2024-07-25 09:04:41.233954] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:34.256 [2024-07-25 09:04:41.233962] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:34.256 [2024-07-25 09:04:41.233972] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=0 00:22:34.256 [2024-07-25 09:04:41.233984] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:22:34.256 [2024-07-25 09:04:41.233994] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:34.256 [2024-07-25 
09:04:41.234017] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:34.256 [2024-07-25 09:04:41.234032] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:34.256 [2024-07-25 09:04:41.234053] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:34.256 [2024-07-25 09:04:41.234066] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:34.256 [2024-07-25 09:04:41.234074] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:34.256 [2024-07-25 09:04:41.234083] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:34.256 [2024-07-25 09:04:41.234107] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:22:34.256 [2024-07-25 09:04:41.234119] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:22:34.256 [2024-07-25 09:04:41.234130] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:22:34.256 [2024-07-25 09:04:41.234141] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:22:34.256 [2024-07-25 09:04:41.234152] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:22:34.256 [2024-07-25 09:04:41.234164] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:22:34.256 [2024-07-25 09:04:41.234192] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:22:34.256 [2024-07-25 09:04:41.234213] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:34.256 [2024-07-25 09:04:41.234225] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:34.256 [2024-07-25 09:04:41.234234] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:34.256 [2024-07-25 09:04:41.234257] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:34.256 [2024-07-25 09:04:41.234302] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:34.256 [2024-07-25 09:04:41.234374] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:34.256 [2024-07-25 09:04:41.234388] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:34.256 [2024-07-25 09:04:41.234396] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:34.256 [2024-07-25 09:04:41.234410] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:34.256 [2024-07-25 09:04:41.234428] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:34.256 [2024-07-25 09:04:41.234439] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:34.256 [2024-07-25 09:04:41.234449] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:34.256 [2024-07-25 09:04:41.234471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.256 [2024-07-25 09:04:41.234487] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:34.256 
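The DEBUG trace above records the admin-queue bring-up against nqn.2016-06.io.spdk:cnode1 over TCP: wait for CSTS.RDY = 1, IDENTIFY controller (cdw10:00000001), capture of the reported limits (MDTS-capped max_xfer_size 131072, CNTLID 0x0001, 16 transport SGEs, fused compare-and-write), then ASYNC EVENT CONFIGURATION. As a rough sketch only, the same exchange can be replayed by hand with SPDK's identify example app against the listener used in this run; the binary name/path and the -L log-flag names are assumptions that differ between SPDK releases, and the *DEBUG* lines only appear with a debug build:

  # Hypothetical manual replay of the identify flow traced above
  # (example binary path and log-flag names may vary per SPDK release)
  ./build/examples/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -L nvme -L nvme_tcp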
[2024-07-25 09:04:41.234496] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:34.256 [2024-07-25 09:04:41.234504] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500000f080) 00:22:34.256 [2024-07-25 09:04:41.234518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.256 [2024-07-25 09:04:41.234531] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:34.256 [2024-07-25 09:04:41.234539] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:34.256 [2024-07-25 09:04:41.234547] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500000f080) 00:22:34.256 [2024-07-25 09:04:41.234565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.256 [2024-07-25 09:04:41.234578] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:34.256 [2024-07-25 09:04:41.234587] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:34.256 [2024-07-25 09:04:41.234595] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:34.256 [2024-07-25 09:04:41.234612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.256 [2024-07-25 09:04:41.234624] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:34.256 [2024-07-25 09:04:41.234647] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:34.256 [2024-07-25 09:04:41.234662] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:34.256 [2024-07-25 09:04:41.234672] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:22:34.256 [2024-07-25 09:04:41.234694] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.256 [2024-07-25 09:04:41.234732] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:34.256 [2024-07-25 09:04:41.234746] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:22:34.256 [2024-07-25 09:04:41.234761] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:22:34.256 [2024-07-25 09:04:41.234772] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:34.256 [2024-07-25 09:04:41.234782] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:22:34.256 [2024-07-25 09:04:41.234906] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:34.256 [2024-07-25 09:04:41.234924] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:34.256 [2024-07-25 09:04:41.234933] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:34.256 [2024-07-25 09:04:41.234949] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:22:34.256 [2024-07-25 09:04:41.234964] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:22:34.256 [2024-07-25 09:04:41.234977] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:34.256 [2024-07-25 09:04:41.234998] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:22:34.256 [2024-07-25 09:04:41.235036] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:34.256 [2024-07-25 09:04:41.235053] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:34.256 [2024-07-25 09:04:41.235063] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:34.256 [2024-07-25 09:04:41.235079] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:22:34.256 [2024-07-25 09:04:41.235097] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:34.256 [2024-07-25 09:04:41.235137] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:22:34.256 [2024-07-25 09:04:41.235207] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:34.256 [2024-07-25 09:04:41.235222] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:34.256 [2024-07-25 09:04:41.235230] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:34.256 [2024-07-25 09:04:41.235239] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:22:34.256 [2024-07-25 09:04:41.235354] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:22:34.256 [2024-07-25 09:04:41.235386] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:34.256 [2024-07-25 09:04:41.235407] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:34.256 [2024-07-25 09:04:41.235418] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:22:34.256 [2024-07-25 09:04:41.235436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.256 [2024-07-25 09:04:41.235471] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:22:34.256 [2024-07-25 09:04:41.235571] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:34.256 [2024-07-25 09:04:41.235589] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:34.256 [2024-07-25 09:04:41.235598] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:34.257 [2024-07-25 09:04:41.235607] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:22:34.257 [2024-07-25 09:04:41.235618] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:22:34.257 [2024-07-25 09:04:41.235627] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:34.257 [2024-07-25 09:04:41.235651] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:34.257 [2024-07-25 09:04:41.235661] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:34.257 [2024-07-25 09:04:41.235677] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:34.257 [2024-07-25 09:04:41.235693] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:34.257 [2024-07-25 09:04:41.235702] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:34.257 [2024-07-25 09:04:41.235711] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:22:34.257 [2024-07-25 09:04:41.235755] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:22:34.257 [2024-07-25 09:04:41.235788] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:22:34.257 [2024-07-25 09:04:41.239863] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:22:34.257 [2024-07-25 09:04:41.239943] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:34.257 [2024-07-25 09:04:41.239966] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:22:34.257 [2024-07-25 09:04:41.240033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.257 [2024-07-25 09:04:41.240103] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:22:34.257 [2024-07-25 09:04:41.240211] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:34.257 [2024-07-25 09:04:41.240240] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:34.257 [2024-07-25 09:04:41.240255] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:34.257 [2024-07-25 09:04:41.240270] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:22:34.257 [2024-07-25 09:04:41.240287] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:22:34.257 [2024-07-25 09:04:41.240308] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:34.257 [2024-07-25 09:04:41.240342] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:34.257 [2024-07-25 09:04:41.240361] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:34.257 [2024-07-25 09:04:41.240389] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:34.257 [2024-07-25 09:04:41.240412] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:34.257 [2024-07-25 09:04:41.240426] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:34.257 [2024-07-25 09:04:41.240442] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:22:34.257 [2024-07-25 09:04:41.240518] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:34.257 [2024-07-25 09:04:41.240590] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 
30000 ms) 00:22:34.257 [2024-07-25 09:04:41.240649] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:34.257 [2024-07-25 09:04:41.240686] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:22:34.257 [2024-07-25 09:04:41.240726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.257 [2024-07-25 09:04:41.240803] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:22:34.257 [2024-07-25 09:04:41.240922] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:34.257 [2024-07-25 09:04:41.240948] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:34.257 [2024-07-25 09:04:41.240963] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:34.257 [2024-07-25 09:04:41.240979] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:22:34.257 [2024-07-25 09:04:41.240996] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:22:34.257 [2024-07-25 09:04:41.241013] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:34.257 [2024-07-25 09:04:41.241049] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:34.257 [2024-07-25 09:04:41.241066] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:34.257 [2024-07-25 09:04:41.241093] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:34.257 [2024-07-25 09:04:41.241116] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:34.257 [2024-07-25 09:04:41.241131] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:34.257 [2024-07-25 09:04:41.241147] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:22:34.257 [2024-07-25 09:04:41.241219] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:34.257 [2024-07-25 09:04:41.241277] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:22:34.257 [2024-07-25 09:04:41.241312] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:22:34.257 [2024-07-25 09:04:41.241339] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:34.257 [2024-07-25 09:04:41.241369] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:34.257 [2024-07-25 09:04:41.241391] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:22:34.257 [2024-07-25 09:04:41.241411] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:22:34.257 [2024-07-25 09:04:41.241438] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:22:34.257 [2024-07-25 09:04:41.241461] 
nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:22:34.257 [2024-07-25 09:04:41.241531] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:34.257 [2024-07-25 09:04:41.241556] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:22:34.257 [2024-07-25 09:04:41.241588] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.257 [2024-07-25 09:04:41.241620] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:34.257 [2024-07-25 09:04:41.241641] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:34.257 [2024-07-25 09:04:41.241658] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:22:34.257 [2024-07-25 09:04:41.241685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.257 [2024-07-25 09:04:41.241781] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:22:34.257 [2024-07-25 09:04:41.241843] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:22:34.257 [2024-07-25 09:04:41.241907] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:34.257 [2024-07-25 09:04:41.241942] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:34.257 [2024-07-25 09:04:41.241962] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:34.257 [2024-07-25 09:04:41.241974] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:22:34.257 [2024-07-25 09:04:41.241993] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:34.257 [2024-07-25 09:04:41.242006] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:34.257 [2024-07-25 09:04:41.242014] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:34.257 [2024-07-25 09:04:41.242023] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:22:34.257 [2024-07-25 09:04:41.242048] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:34.257 [2024-07-25 09:04:41.242059] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:22:34.257 [2024-07-25 09:04:41.242085] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.257 [2024-07-25 09:04:41.242130] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:22:34.257 [2024-07-25 09:04:41.242193] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:34.257 [2024-07-25 09:04:41.242210] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:34.257 [2024-07-25 09:04:41.242219] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:34.257 [2024-07-25 09:04:41.242228] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:22:34.257 [2024-07-25 09:04:41.242250] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:34.257 [2024-07-25 09:04:41.242266] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=5 on tqpair(0x61500000f080) 00:22:34.257 [2024-07-25 09:04:41.242290] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.257 [2024-07-25 09:04:41.242326] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:22:34.257 [2024-07-25 09:04:41.242394] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:34.257 [2024-07-25 09:04:41.242408] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:34.257 [2024-07-25 09:04:41.242417] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:34.257 [2024-07-25 09:04:41.242425] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:22:34.257 [2024-07-25 09:04:41.242459] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:34.257 [2024-07-25 09:04:41.242471] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:22:34.257 [2024-07-25 09:04:41.242491] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.257 [2024-07-25 09:04:41.242529] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:22:34.258 [2024-07-25 09:04:41.242591] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:34.258 [2024-07-25 09:04:41.242609] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:34.258 [2024-07-25 09:04:41.242618] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:34.258 [2024-07-25 09:04:41.242627] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:22:34.258 [2024-07-25 09:04:41.242669] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:34.258 [2024-07-25 09:04:41.242682] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:22:34.258 [2024-07-25 09:04:41.242700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.258 [2024-07-25 09:04:41.242718] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:34.258 [2024-07-25 09:04:41.242728] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:22:34.258 [2024-07-25 09:04:41.242743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.258 [2024-07-25 09:04:41.242766] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:34.258 [2024-07-25 09:04:41.242780] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x61500000f080) 00:22:34.258 [2024-07-25 09:04:41.242796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.258 [2024-07-25 09:04:41.242842] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:34.258 [2024-07-25 09:04:41.242871] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500000f080) 
00:22:34.258 [2024-07-25 09:04:41.242894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.258 [2024-07-25 09:04:41.242938] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:22:34.258 [2024-07-25 09:04:41.242953] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:22:34.258 [2024-07-25 09:04:41.242964] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:22:34.258 [2024-07-25 09:04:41.242973] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:22:34.258 [2024-07-25 09:04:41.243163] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:34.258 [2024-07-25 09:04:41.243191] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:34.258 [2024-07-25 09:04:41.243202] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:34.258 [2024-07-25 09:04:41.243212] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=8192, cccid=5 00:22:34.258 [2024-07-25 09:04:41.243230] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x61500000f080): expected_datao=0, payload_size=8192 00:22:34.258 [2024-07-25 09:04:41.243240] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:34.258 [2024-07-25 09:04:41.243279] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:34.258 [2024-07-25 09:04:41.243292] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:34.258 [2024-07-25 09:04:41.243314] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:34.258 [2024-07-25 09:04:41.243327] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:34.258 [2024-07-25 09:04:41.243335] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:34.258 [2024-07-25 09:04:41.243344] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=512, cccid=4 00:22:34.258 [2024-07-25 09:04:41.243353] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=512 00:22:34.258 [2024-07-25 09:04:41.243362] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:34.258 [2024-07-25 09:04:41.243377] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:34.258 [2024-07-25 09:04:41.243385] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:34.258 [2024-07-25 09:04:41.243403] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:34.258 [2024-07-25 09:04:41.243416] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:34.258 [2024-07-25 09:04:41.243423] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:34.258 [2024-07-25 09:04:41.243431] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=512, cccid=6 00:22:34.258 [2024-07-25 09:04:41.243441] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on tqpair(0x61500000f080): expected_datao=0, payload_size=512 00:22:34.258 [2024-07-25 09:04:41.243450] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:34.258 [2024-07-25 09:04:41.243466] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:34.258 [2024-07-25 09:04:41.243475] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:34.258 [2024-07-25 09:04:41.243486] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:34.258 [2024-07-25 09:04:41.243497] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:34.258 [2024-07-25 09:04:41.243505] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:34.258 [2024-07-25 09:04:41.243516] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=7 00:22:34.258 [2024-07-25 09:04:41.243527] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:22:34.258 [2024-07-25 09:04:41.243536] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:34.258 [2024-07-25 09:04:41.243549] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:34.258 [2024-07-25 09:04:41.243557] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:34.258 [2024-07-25 09:04:41.243569] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:34.258 [2024-07-25 09:04:41.243580] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:34.258 [2024-07-25 09:04:41.243587] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:34.258 [2024-07-25 09:04:41.243597] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:22:34.258 [2024-07-25 09:04:41.243634] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:34.258 [2024-07-25 09:04:41.243674] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:34.258 [2024-07-25 09:04:41.243683] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:34.258 [2024-07-25 09:04:41.243692] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:22:34.258 [2024-07-25 09:04:41.243713] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:34.258 [2024-07-25 09:04:41.243726] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:34.258 [2024-07-25 09:04:41.243734] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:34.258 [2024-07-25 09:04:41.243742] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x61500000f080 00:22:34.258 [2024-07-25 09:04:41.243759] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:34.258 [2024-07-25 09:04:41.243771] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:34.258 [2024-07-25 09:04:41.243781] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:34.258 [2024-07-25 09:04:41.243790] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500000f080 00:22:34.258 ===================================================== 00:22:34.258 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:34.258 ===================================================== 00:22:34.258 Controller Capabilities/Features 00:22:34.258 ================================ 00:22:34.258 Vendor ID: 8086 00:22:34.258 Subsystem Vendor ID: 8086 00:22:34.258 Serial Number: SPDK00000000000001 00:22:34.258 Model Number: SPDK bdev Controller 00:22:34.258 Firmware Version: 24.09 00:22:34.258 
Recommended Arb Burst: 6 00:22:34.258 IEEE OUI Identifier: e4 d2 5c 00:22:34.258 Multi-path I/O 00:22:34.258 May have multiple subsystem ports: Yes 00:22:34.258 May have multiple controllers: Yes 00:22:34.258 Associated with SR-IOV VF: No 00:22:34.258 Max Data Transfer Size: 131072 00:22:34.258 Max Number of Namespaces: 32 00:22:34.258 Max Number of I/O Queues: 127 00:22:34.258 NVMe Specification Version (VS): 1.3 00:22:34.258 NVMe Specification Version (Identify): 1.3 00:22:34.258 Maximum Queue Entries: 128 00:22:34.258 Contiguous Queues Required: Yes 00:22:34.258 Arbitration Mechanisms Supported 00:22:34.258 Weighted Round Robin: Not Supported 00:22:34.258 Vendor Specific: Not Supported 00:22:34.258 Reset Timeout: 15000 ms 00:22:34.258 Doorbell Stride: 4 bytes 00:22:34.258 NVM Subsystem Reset: Not Supported 00:22:34.258 Command Sets Supported 00:22:34.258 NVM Command Set: Supported 00:22:34.258 Boot Partition: Not Supported 00:22:34.258 Memory Page Size Minimum: 4096 bytes 00:22:34.258 Memory Page Size Maximum: 4096 bytes 00:22:34.258 Persistent Memory Region: Not Supported 00:22:34.258 Optional Asynchronous Events Supported 00:22:34.258 Namespace Attribute Notices: Supported 00:22:34.258 Firmware Activation Notices: Not Supported 00:22:34.258 ANA Change Notices: Not Supported 00:22:34.258 PLE Aggregate Log Change Notices: Not Supported 00:22:34.258 LBA Status Info Alert Notices: Not Supported 00:22:34.258 EGE Aggregate Log Change Notices: Not Supported 00:22:34.258 Normal NVM Subsystem Shutdown event: Not Supported 00:22:34.258 Zone Descriptor Change Notices: Not Supported 00:22:34.258 Discovery Log Change Notices: Not Supported 00:22:34.258 Controller Attributes 00:22:34.258 128-bit Host Identifier: Supported 00:22:34.258 Non-Operational Permissive Mode: Not Supported 00:22:34.259 NVM Sets: Not Supported 00:22:34.259 Read Recovery Levels: Not Supported 00:22:34.259 Endurance Groups: Not Supported 00:22:34.259 Predictable Latency Mode: Not Supported 00:22:34.259 Traffic Based Keep ALive: Not Supported 00:22:34.259 Namespace Granularity: Not Supported 00:22:34.259 SQ Associations: Not Supported 00:22:34.259 UUID List: Not Supported 00:22:34.259 Multi-Domain Subsystem: Not Supported 00:22:34.259 Fixed Capacity Management: Not Supported 00:22:34.259 Variable Capacity Management: Not Supported 00:22:34.259 Delete Endurance Group: Not Supported 00:22:34.259 Delete NVM Set: Not Supported 00:22:34.259 Extended LBA Formats Supported: Not Supported 00:22:34.259 Flexible Data Placement Supported: Not Supported 00:22:34.259 00:22:34.259 Controller Memory Buffer Support 00:22:34.259 ================================ 00:22:34.259 Supported: No 00:22:34.259 00:22:34.259 Persistent Memory Region Support 00:22:34.259 ================================ 00:22:34.259 Supported: No 00:22:34.259 00:22:34.259 Admin Command Set Attributes 00:22:34.259 ============================ 00:22:34.259 Security Send/Receive: Not Supported 00:22:34.259 Format NVM: Not Supported 00:22:34.259 Firmware Activate/Download: Not Supported 00:22:34.259 Namespace Management: Not Supported 00:22:34.259 Device Self-Test: Not Supported 00:22:34.259 Directives: Not Supported 00:22:34.259 NVMe-MI: Not Supported 00:22:34.259 Virtualization Management: Not Supported 00:22:34.259 Doorbell Buffer Config: Not Supported 00:22:34.259 Get LBA Status Capability: Not Supported 00:22:34.259 Command & Feature Lockdown Capability: Not Supported 00:22:34.259 Abort Command Limit: 4 00:22:34.259 Async Event Request Limit: 4 00:22:34.259 Number of 
Firmware Slots: N/A 00:22:34.259 Firmware Slot 1 Read-Only: N/A 00:22:34.259 Firmware Activation Without Reset: N/A 00:22:34.259 Multiple Update Detection Support: N/A 00:22:34.259 Firmware Update Granularity: No Information Provided 00:22:34.259 Per-Namespace SMART Log: No 00:22:34.259 Asymmetric Namespace Access Log Page: Not Supported 00:22:34.259 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:34.259 Command Effects Log Page: Supported 00:22:34.259 Get Log Page Extended Data: Supported 00:22:34.259 Telemetry Log Pages: Not Supported 00:22:34.259 Persistent Event Log Pages: Not Supported 00:22:34.259 Supported Log Pages Log Page: May Support 00:22:34.259 Commands Supported & Effects Log Page: Not Supported 00:22:34.259 Feature Identifiers & Effects Log Page:May Support 00:22:34.259 NVMe-MI Commands & Effects Log Page: May Support 00:22:34.259 Data Area 4 for Telemetry Log: Not Supported 00:22:34.259 Error Log Page Entries Supported: 128 00:22:34.259 Keep Alive: Supported 00:22:34.259 Keep Alive Granularity: 10000 ms 00:22:34.259 00:22:34.259 NVM Command Set Attributes 00:22:34.259 ========================== 00:22:34.259 Submission Queue Entry Size 00:22:34.259 Max: 64 00:22:34.259 Min: 64 00:22:34.259 Completion Queue Entry Size 00:22:34.259 Max: 16 00:22:34.259 Min: 16 00:22:34.259 Number of Namespaces: 32 00:22:34.259 Compare Command: Supported 00:22:34.259 Write Uncorrectable Command: Not Supported 00:22:34.259 Dataset Management Command: Supported 00:22:34.259 Write Zeroes Command: Supported 00:22:34.259 Set Features Save Field: Not Supported 00:22:34.259 Reservations: Supported 00:22:34.259 Timestamp: Not Supported 00:22:34.259 Copy: Supported 00:22:34.259 Volatile Write Cache: Present 00:22:34.259 Atomic Write Unit (Normal): 1 00:22:34.259 Atomic Write Unit (PFail): 1 00:22:34.259 Atomic Compare & Write Unit: 1 00:22:34.259 Fused Compare & Write: Supported 00:22:34.259 Scatter-Gather List 00:22:34.259 SGL Command Set: Supported 00:22:34.259 SGL Keyed: Supported 00:22:34.259 SGL Bit Bucket Descriptor: Not Supported 00:22:34.259 SGL Metadata Pointer: Not Supported 00:22:34.259 Oversized SGL: Not Supported 00:22:34.259 SGL Metadata Address: Not Supported 00:22:34.259 SGL Offset: Supported 00:22:34.259 Transport SGL Data Block: Not Supported 00:22:34.259 Replay Protected Memory Block: Not Supported 00:22:34.259 00:22:34.259 Firmware Slot Information 00:22:34.259 ========================= 00:22:34.259 Active slot: 1 00:22:34.259 Slot 1 Firmware Revision: 24.09 00:22:34.259 00:22:34.259 00:22:34.259 Commands Supported and Effects 00:22:34.259 ============================== 00:22:34.259 Admin Commands 00:22:34.259 -------------- 00:22:34.259 Get Log Page (02h): Supported 00:22:34.259 Identify (06h): Supported 00:22:34.259 Abort (08h): Supported 00:22:34.259 Set Features (09h): Supported 00:22:34.259 Get Features (0Ah): Supported 00:22:34.259 Asynchronous Event Request (0Ch): Supported 00:22:34.259 Keep Alive (18h): Supported 00:22:34.259 I/O Commands 00:22:34.259 ------------ 00:22:34.259 Flush (00h): Supported LBA-Change 00:22:34.259 Write (01h): Supported LBA-Change 00:22:34.259 Read (02h): Supported 00:22:34.259 Compare (05h): Supported 00:22:34.259 Write Zeroes (08h): Supported LBA-Change 00:22:34.259 Dataset Management (09h): Supported LBA-Change 00:22:34.259 Copy (19h): Supported LBA-Change 00:22:34.259 00:22:34.259 Error Log 00:22:34.259 ========= 00:22:34.259 00:22:34.259 Arbitration 00:22:34.259 =========== 00:22:34.259 Arbitration Burst: 1 00:22:34.259 00:22:34.259 Power 
Management 00:22:34.259 ================ 00:22:34.259 Number of Power States: 1 00:22:34.259 Current Power State: Power State #0 00:22:34.259 Power State #0: 00:22:34.259 Max Power: 0.00 W 00:22:34.259 Non-Operational State: Operational 00:22:34.259 Entry Latency: Not Reported 00:22:34.259 Exit Latency: Not Reported 00:22:34.259 Relative Read Throughput: 0 00:22:34.259 Relative Read Latency: 0 00:22:34.259 Relative Write Throughput: 0 00:22:34.259 Relative Write Latency: 0 00:22:34.259 Idle Power: Not Reported 00:22:34.259 Active Power: Not Reported 00:22:34.259 Non-Operational Permissive Mode: Not Supported 00:22:34.259 00:22:34.259 Health Information 00:22:34.259 ================== 00:22:34.259 Critical Warnings: 00:22:34.259 Available Spare Space: OK 00:22:34.259 Temperature: OK 00:22:34.259 Device Reliability: OK 00:22:34.259 Read Only: No 00:22:34.259 Volatile Memory Backup: OK 00:22:34.259 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:34.259 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:34.259 Available Spare: 0% 00:22:34.259 Available Spare Threshold: 0% 00:22:34.259 Life Percentage Used:[2024-07-25 09:04:41.248123] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:34.259 [2024-07-25 09:04:41.248154] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500000f080) 00:22:34.259 [2024-07-25 09:04:41.248178] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.259 [2024-07-25 09:04:41.248229] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:22:34.259 [2024-07-25 09:04:41.248317] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:34.259 [2024-07-25 09:04:41.248356] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:34.259 [2024-07-25 09:04:41.248368] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:34.259 [2024-07-25 09:04:41.248379] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500000f080 00:22:34.259 [2024-07-25 09:04:41.248480] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:22:34.259 [2024-07-25 09:04:41.248525] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:34.259 [2024-07-25 09:04:41.248544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.259 [2024-07-25 09:04:41.248558] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500000f080 00:22:34.259 [2024-07-25 09:04:41.248570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.260 [2024-07-25 09:04:41.248581] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500000f080 00:22:34.260 [2024-07-25 09:04:41.248607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.260 [2024-07-25 09:04:41.248618] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:34.260 [2024-07-25 09:04:41.248629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.260 [2024-07-25 09:04:41.248649] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:34.260 [2024-07-25 09:04:41.248665] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:34.260 [2024-07-25 09:04:41.248676] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:34.260 [2024-07-25 09:04:41.248695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.260 [2024-07-25 09:04:41.248749] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:34.260 [2024-07-25 09:04:41.248843] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:34.260 [2024-07-25 09:04:41.248878] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:34.260 [2024-07-25 09:04:41.248890] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:34.260 [2024-07-25 09:04:41.248900] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:34.260 [2024-07-25 09:04:41.248926] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:34.260 [2024-07-25 09:04:41.248938] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:34.260 [2024-07-25 09:04:41.248948] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:34.260 [2024-07-25 09:04:41.248966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.260 [2024-07-25 09:04:41.249018] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:34.260 [2024-07-25 09:04:41.249122] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:34.260 [2024-07-25 09:04:41.249154] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:34.260 [2024-07-25 09:04:41.249164] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:34.260 [2024-07-25 09:04:41.249174] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:34.260 [2024-07-25 09:04:41.249186] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:22:34.260 [2024-07-25 09:04:41.249197] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:22:34.260 [2024-07-25 09:04:41.249220] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:34.260 [2024-07-25 09:04:41.249232] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:34.260 [2024-07-25 09:04:41.249241] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:34.260 [2024-07-25 09:04:41.249259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.260 [2024-07-25 09:04:41.249295] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:34.260 [2024-07-25 09:04:41.249366] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:34.260 [2024-07-25 09:04:41.249388] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:34.260 [2024-07-25 09:04:41.249398] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:34.260 [2024-07-25 09:04:41.249407] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:34.260 [2024-07-25 09:04:41.249431] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:34.260 [2024-07-25 09:04:41.249442] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:34.260 [2024-07-25 09:04:41.249450] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:34.260 [2024-07-25 09:04:41.249467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.260 [2024-07-25 09:04:41.249500] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:34.260 [2024-07-25 09:04:41.249578] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:34.260 [2024-07-25 09:04:41.249594] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:34.260 [2024-07-25 09:04:41.249602] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:34.260 [2024-07-25 09:04:41.249611] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:34.260 [2024-07-25 09:04:41.249634] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:34.260 [2024-07-25 09:04:41.249644] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:34.260 [2024-07-25 09:04:41.249653] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:34.260 [2024-07-25 09:04:41.249669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.260 [2024-07-25 09:04:41.249711] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:34.260 [2024-07-25 09:04:41.249768] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:34.260 [2024-07-25 09:04:41.249793] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:34.260 [2024-07-25 09:04:41.249803] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:34.260 [2024-07-25 09:04:41.249832] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:34.260 [2024-07-25 09:04:41.249871] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:34.260 [2024-07-25 09:04:41.249883] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:34.260 [2024-07-25 09:04:41.249892] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:34.260 [2024-07-25 09:04:41.249918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.260 [2024-07-25 09:04:41.249958] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:34.260 [2024-07-25 09:04:41.250025] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:34.260 [2024-07-25 09:04:41.250047] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:34.260 [2024-07-25 09:04:41.250057] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:34.260 [2024-07-25 09:04:41.250067] nvme_tcp.c:1069:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:34.260 [2024-07-25 09:04:41.250090] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:34.260 [2024-07-25 09:04:41.250107] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:34.260 [2024-07-25 09:04:41.250117] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:34.260 [2024-07-25 09:04:41.250133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.260 [2024-07-25 09:04:41.250167] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:34.260 [2024-07-25 09:04:41.250237] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:34.260 [2024-07-25 09:04:41.250259] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:34.260 [2024-07-25 09:04:41.250268] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:34.260 [2024-07-25 09:04:41.250278] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:34.260 [2024-07-25 09:04:41.250305] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:34.260 [2024-07-25 09:04:41.250316] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:34.260 [2024-07-25 09:04:41.250324] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:34.260 [2024-07-25 09:04:41.250340] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.260 [2024-07-25 09:04:41.250373] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:34.260 [2024-07-25 09:04:41.250443] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:34.260 [2024-07-25 09:04:41.250465] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:34.260 [2024-07-25 09:04:41.250478] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:34.260 [2024-07-25 09:04:41.250489] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:34.260 [2024-07-25 09:04:41.250512] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:34.260 [2024-07-25 09:04:41.250522] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:34.261 [2024-07-25 09:04:41.250530] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:34.261 [2024-07-25 09:04:41.250546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.261 [2024-07-25 09:04:41.250579] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:34.261 [2024-07-25 09:04:41.250646] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:34.261 [2024-07-25 09:04:41.250673] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:34.261 [2024-07-25 09:04:41.250684] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:34.261 [2024-07-25 09:04:41.250693] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:34.261 [2024-07-25 09:04:41.250716] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:34.261 [2024-07-25 09:04:41.250726] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:34.261 [2024-07-25 09:04:41.250735] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:34.261 [2024-07-25 09:04:41.250750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.261 [2024-07-25 09:04:41.250783] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:34.261 [2024-07-25 09:04:41.250859] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:34.261 [2024-07-25 09:04:41.250886] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:34.261 [2024-07-25 09:04:41.250909] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:34.261 [2024-07-25 09:04:41.250922] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:34.261 [2024-07-25 09:04:41.250949] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:34.261 [2024-07-25 09:04:41.250961] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:34.261 [2024-07-25 09:04:41.250970] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:34.261 [2024-07-25 09:04:41.250987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.261 [2024-07-25 09:04:41.251026] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:34.261 [2024-07-25 09:04:41.251100] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:34.261 [2024-07-25 09:04:41.251119] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:34.261 [2024-07-25 09:04:41.251128] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:34.261 [2024-07-25 09:04:41.251137] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:34.261 [2024-07-25 09:04:41.251159] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:34.261 [2024-07-25 09:04:41.251171] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:34.261 [2024-07-25 09:04:41.251180] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:34.261 [2024-07-25 09:04:41.251196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.261 [2024-07-25 09:04:41.251228] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:34.261 [2024-07-25 09:04:41.251295] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:34.261 [2024-07-25 09:04:41.251318] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:34.261 [2024-07-25 09:04:41.251327] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:34.261 [2024-07-25 09:04:41.251337] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:34.261 [2024-07-25 09:04:41.251360] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:34.261 [2024-07-25 09:04:41.251370] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:22:34.261 [2024-07-25 09:04:41.251379] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:34.261 [2024-07-25 09:04:41.251395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.261 [2024-07-25 09:04:41.251428] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:34.261 [2024-07-25 09:04:41.251497] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:34.261 [2024-07-25 09:04:41.251512] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:34.261 [2024-07-25 09:04:41.251520] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:34.261 [2024-07-25 09:04:41.251529] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:34.261 [2024-07-25 09:04:41.251550] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:34.261 [2024-07-25 09:04:41.251560] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:34.261 [2024-07-25 09:04:41.251568] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:34.261 [2024-07-25 09:04:41.251592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.261 [2024-07-25 09:04:41.251625] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:34.261 [2024-07-25 09:04:41.251690] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:34.261 [2024-07-25 09:04:41.251728] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:34.261 [2024-07-25 09:04:41.251738] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:34.261 [2024-07-25 09:04:41.251747] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:34.261 [2024-07-25 09:04:41.251771] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:34.261 [2024-07-25 09:04:41.251782] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:34.261 [2024-07-25 09:04:41.251790] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:34.261 [2024-07-25 09:04:41.251806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.261 [2024-07-25 09:04:41.255926] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:34.261 [2024-07-25 09:04:41.256013] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:34.261 [2024-07-25 09:04:41.256043] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:34.261 [2024-07-25 09:04:41.256058] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:34.261 [2024-07-25 09:04:41.256075] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:34.261 [2024-07-25 09:04:41.256107] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:22:34.261 0% 00:22:34.261 Data Units Read: 0 00:22:34.261 Data Units Written: 0 00:22:34.261 Host Read Commands: 0 00:22:34.261 Host Write Commands: 0 00:22:34.261 Controller Busy 
Time: 0 minutes 00:22:34.261 Power Cycles: 0 00:22:34.261 Power On Hours: 0 hours 00:22:34.261 Unsafe Shutdowns: 0 00:22:34.261 Unrecoverable Media Errors: 0 00:22:34.261 Lifetime Error Log Entries: 0 00:22:34.261 Warning Temperature Time: 0 minutes 00:22:34.261 Critical Temperature Time: 0 minutes 00:22:34.261 00:22:34.261 Number of Queues 00:22:34.261 ================ 00:22:34.261 Number of I/O Submission Queues: 127 00:22:34.261 Number of I/O Completion Queues: 127 00:22:34.261 00:22:34.261 Active Namespaces 00:22:34.261 ================= 00:22:34.261 Namespace ID:1 00:22:34.261 Error Recovery Timeout: Unlimited 00:22:34.261 Command Set Identifier: NVM (00h) 00:22:34.261 Deallocate: Supported 00:22:34.261 Deallocated/Unwritten Error: Not Supported 00:22:34.261 Deallocated Read Value: Unknown 00:22:34.261 Deallocate in Write Zeroes: Not Supported 00:22:34.261 Deallocated Guard Field: 0xFFFF 00:22:34.261 Flush: Supported 00:22:34.261 Reservation: Supported 00:22:34.261 Namespace Sharing Capabilities: Multiple Controllers 00:22:34.261 Size (in LBAs): 131072 (0GiB) 00:22:34.261 Capacity (in LBAs): 131072 (0GiB) 00:22:34.261 Utilization (in LBAs): 131072 (0GiB) 00:22:34.261 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:34.261 EUI64: ABCDEF0123456789 00:22:34.261 UUID: 1b9225fd-ba89-4072-aab8-fd196194e0f5 00:22:34.261 Thin Provisioning: Not Supported 00:22:34.261 Per-NS Atomic Units: Yes 00:22:34.261 Atomic Boundary Size (Normal): 0 00:22:34.261 Atomic Boundary Size (PFail): 0 00:22:34.261 Atomic Boundary Offset: 0 00:22:34.261 Maximum Single Source Range Length: 65535 00:22:34.261 Maximum Copy Length: 65535 00:22:34.261 Maximum Source Range Count: 1 00:22:34.261 NGUID/EUI64 Never Reused: No 00:22:34.261 Namespace Write Protected: No 00:22:34.261 Number of LBA Formats: 1 00:22:34.261 Current LBA Format: LBA Format #00 00:22:34.261 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:34.261 00:22:34.261 09:04:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:34.520 09:04:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:34.520 09:04:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.520 09:04:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:34.520 09:04:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.520 09:04:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:34.520 09:04:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:34.520 09:04:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:34.520 09:04:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:22:34.520 09:04:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:34.520 09:04:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:22:34.520 09:04:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:34.520 09:04:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:34.520 rmmod nvme_tcp 00:22:34.520 rmmod nvme_fabrics 00:22:34.520 rmmod nvme_keyring 00:22:34.520 09:04:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:34.520 09:04:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:22:34.520 
09:04:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:22:34.520 09:04:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 79726 ']' 00:22:34.520 09:04:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 79726 00:22:34.520 09:04:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 79726 ']' 00:22:34.520 09:04:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 79726 00:22:34.520 09:04:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:22:34.520 09:04:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:34.520 09:04:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79726 00:22:34.520 09:04:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:34.520 09:04:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:34.520 killing process with pid 79726 00:22:34.520 09:04:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79726' 00:22:34.520 09:04:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 79726 00:22:34.520 09:04:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 79726 00:22:35.896 09:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:35.896 09:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:35.896 09:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:35.896 09:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:35.896 09:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:35.896 09:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.896 09:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:35.896 09:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.896 09:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:35.896 00:22:35.896 real 0m3.993s 00:22:35.896 user 0m10.527s 00:22:35.896 sys 0m0.926s 00:22:35.896 09:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:35.896 ************************************ 00:22:35.896 END TEST nvmf_identify 00:22:35.896 09:04:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:35.896 ************************************ 00:22:35.896 09:04:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:35.896 09:04:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:35.896 09:04:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:35.896 09:04:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.896 ************************************ 00:22:35.896 START TEST nvmf_perf 00:22:35.896 ************************************ 00:22:35.896 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:36.156 * Looking for test storage... 00:22:36.156 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:36.156 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:36.156 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:36.156 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:36.156 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:36.156 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:36.156 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:36.156 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:36.156 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:36.156 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:36.156 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:36.156 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:36.156 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:36.156 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:22:36.156 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:22:36.156 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:36.156 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:36.156 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:36.156 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:36.156 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:36.156 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:36.156 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:36.156 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:36.156 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.156 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.156 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.156 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:36.157 09:04:43 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:36.157 Cannot find device "nvmf_tgt_br" 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # true 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:36.157 Cannot find device "nvmf_tgt_br2" 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # true 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:36.157 Cannot find device "nvmf_tgt_br" 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # true 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # 
ip link set nvmf_tgt_br2 down 00:22:36.157 Cannot find device "nvmf_tgt_br2" 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # true 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:36.157 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:36.157 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:36.157 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:36.417 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:36.417 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:36.417 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:36.417 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:36.417 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:36.418 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:36.418 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:36.418 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:36.418 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:36.418 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:36.418 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:36.418 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:36.418 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:36.418 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:36.418 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:36.418 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:36.418 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:36.418 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 
master nvmf_br 00:22:36.418 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:36.418 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:36.418 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:36.418 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:36.418 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:22:36.418 00:22:36.418 --- 10.0.0.2 ping statistics --- 00:22:36.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:36.418 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:22:36.418 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:36.418 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:36.418 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:22:36.418 00:22:36.418 --- 10.0.0.3 ping statistics --- 00:22:36.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:36.418 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:22:36.418 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:36.418 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:36.418 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:22:36.418 00:22:36.418 --- 10.0.0.1 ping statistics --- 00:22:36.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:36.418 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:22:36.418 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:36.418 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:22:36.418 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:36.418 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:36.418 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:36.418 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:36.418 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:36.418 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:36.418 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:36.418 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:36.418 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:36.418 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:36.418 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:36.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:36.418 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=79951 00:22:36.418 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:36.418 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 79951 00:22:36.418 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 79951 ']' 00:22:36.418 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:36.418 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:36.418 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:36.418 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:36.418 09:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:36.676 [2024-07-25 09:04:43.603706] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:22:36.676 [2024-07-25 09:04:43.603883] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:36.676 [2024-07-25 09:04:43.782935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:37.243 [2024-07-25 09:04:44.065166] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:37.243 [2024-07-25 09:04:44.065254] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:37.243 [2024-07-25 09:04:44.065273] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:37.243 [2024-07-25 09:04:44.065289] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:37.243 [2024-07-25 09:04:44.065304] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
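The nvmfappstart call above, plus the rpc.py configuration that perf.sh issues in the lines that follow, reduces to a short sequence. A hedged sketch only: paths relative to the SPDK repo root are an assumption, and the rpc_get_methods readiness poll is a stand-in for the test's waitforlisten helper rather than its actual implementation.

# start the target inside the namespace with the core mask and trace flags from the log
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# wait until the RPC socket (/var/tmp/spdk.sock by default) answers before configuring
until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
# transport, bdev, subsystem, namespaces and listener, as perf.sh does below
./scripts/rpc.py nvmf_create_transport -t tcp -o
./scripts/rpc.py bdev_malloc_create 64 512                       # yields Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420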
00:22:37.243 [2024-07-25 09:04:44.065522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:37.243 [2024-07-25 09:04:44.065834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:37.243 [2024-07-25 09:04:44.065961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:37.243 [2024-07-25 09:04:44.065968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:37.243 [2024-07-25 09:04:44.273308] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:37.502 09:04:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:37.502 09:04:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:22:37.502 09:04:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:37.502 09:04:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:37.502 09:04:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:37.502 09:04:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:37.502 09:04:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:22:37.502 09:04:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:22:38.135 09:04:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:22:38.135 09:04:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:38.135 09:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:22:38.135 09:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:38.393 09:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:38.393 09:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:22:38.393 09:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:38.393 09:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:38.393 09:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:38.652 [2024-07-25 09:04:45.756918] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:38.910 09:04:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:39.169 09:04:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:39.169 09:04:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:39.428 09:04:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:39.428 09:04:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:39.686 09:04:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:22:39.945 [2024-07-25 09:04:46.811211] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:39.945 09:04:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:40.221 09:04:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:22:40.221 09:04:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:22:40.221 09:04:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:40.221 09:04:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:22:41.173 Initializing NVMe Controllers 00:22:41.173 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:22:41.173 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:22:41.173 Initialization complete. Launching workers. 00:22:41.173 ======================================================== 00:22:41.173 Latency(us) 00:22:41.173 Device Information : IOPS MiB/s Average min max 00:22:41.173 PCIE (0000:00:10.0) NSID 1 from core 0: 23173.60 90.52 1381.20 360.11 8710.15 00:22:41.173 ======================================================== 00:22:41.173 Total : 23173.60 90.52 1381.20 360.11 8710.15 00:22:41.173 00:22:41.173 09:04:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:42.547 Initializing NVMe Controllers 00:22:42.547 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:42.547 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:42.547 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:42.547 Initialization complete. Launching workers. 00:22:42.547 ======================================================== 00:22:42.547 Latency(us) 00:22:42.547 Device Information : IOPS MiB/s Average min max 00:22:42.547 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2644.80 10.33 377.53 149.93 7182.79 00:22:42.547 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.99 0.48 8127.57 4941.63 12054.85 00:22:42.547 ======================================================== 00:22:42.547 Total : 2768.79 10.82 724.59 149.93 12054.85 00:22:42.547 00:22:42.806 09:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:44.181 Initializing NVMe Controllers 00:22:44.181 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:44.181 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:44.181 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:44.181 Initialization complete. Launching workers. 
00:22:44.181 ======================================================== 00:22:44.181 Latency(us) 00:22:44.181 Device Information : IOPS MiB/s Average min max 00:22:44.181 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6228.10 24.33 5138.62 784.84 10877.04 00:22:44.181 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3996.78 15.61 8017.36 4550.85 14481.34 00:22:44.181 ======================================================== 00:22:44.181 Total : 10224.88 39.94 6263.88 784.84 14481.34 00:22:44.181 00:22:44.181 09:04:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:22:44.181 09:04:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:47.468 Initializing NVMe Controllers 00:22:47.468 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:47.468 Controller IO queue size 128, less than required. 00:22:47.468 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:47.468 Controller IO queue size 128, less than required. 00:22:47.468 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:47.468 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:47.468 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:47.468 Initialization complete. Launching workers. 00:22:47.468 ======================================================== 00:22:47.468 Latency(us) 00:22:47.468 Device Information : IOPS MiB/s Average min max 00:22:47.468 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1186.37 296.59 112472.03 69715.54 290788.27 00:22:47.468 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 547.55 136.89 249786.77 103105.16 490475.95 00:22:47.468 ======================================================== 00:22:47.468 Total : 1733.92 433.48 155834.58 69715.54 490475.95 00:22:47.468 00:22:47.468 09:04:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:47.468 Initializing NVMe Controllers 00:22:47.468 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:47.468 Controller IO queue size 128, less than required. 00:22:47.468 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:47.468 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:47.468 Controller IO queue size 128, less than required. 00:22:47.468 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:47.468 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:22:47.468 WARNING: Some requested NVMe devices were skipped 00:22:47.469 No valid NVMe controllers or AIO or URING devices found 00:22:47.469 09:04:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:50.776 Initializing NVMe Controllers 00:22:50.776 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:50.776 Controller IO queue size 128, less than required. 00:22:50.776 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:50.776 Controller IO queue size 128, less than required. 00:22:50.776 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:50.776 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:50.776 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:50.776 Initialization complete. Launching workers. 00:22:50.776 00:22:50.776 ==================== 00:22:50.776 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:50.776 TCP transport: 00:22:50.776 polls: 4942 00:22:50.776 idle_polls: 2629 00:22:50.776 sock_completions: 2313 00:22:50.776 nvme_completions: 4491 00:22:50.776 submitted_requests: 6850 00:22:50.776 queued_requests: 1 00:22:50.776 00:22:50.776 ==================== 00:22:50.776 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:50.776 TCP transport: 00:22:50.776 polls: 5681 00:22:50.776 idle_polls: 3186 00:22:50.776 sock_completions: 2495 00:22:50.776 nvme_completions: 4831 00:22:50.776 submitted_requests: 7336 00:22:50.776 queued_requests: 1 00:22:50.776 ======================================================== 00:22:50.776 Latency(us) 00:22:50.776 Device Information : IOPS MiB/s Average min max 00:22:50.776 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1122.40 280.60 123882.84 53512.94 436211.82 00:22:50.776 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1207.40 301.85 105608.77 45519.40 299883.90 00:22:50.777 ======================================================== 00:22:50.777 Total : 2329.80 582.45 114412.48 45519.40 436211.82 00:22:50.777 00:22:50.777 09:04:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:50.777 09:04:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:50.777 09:04:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:22:50.777 09:04:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:22:50.777 09:04:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:22:51.035 09:04:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=30e7f634-c7b1-4821-9c94-75de87fa9e62 00:22:51.035 09:04:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 30e7f634-c7b1-4821-9c94-75de87fa9e62 00:22:51.035 09:04:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=30e7f634-c7b1-4821-9c94-75de87fa9e62 00:22:51.035 09:04:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local 
lvs_info 00:22:51.035 09:04:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:22:51.035 09:04:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:22:51.035 09:04:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:51.293 09:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:22:51.293 { 00:22:51.293 "uuid": "30e7f634-c7b1-4821-9c94-75de87fa9e62", 00:22:51.293 "name": "lvs_0", 00:22:51.293 "base_bdev": "Nvme0n1", 00:22:51.293 "total_data_clusters": 1278, 00:22:51.293 "free_clusters": 1278, 00:22:51.293 "block_size": 4096, 00:22:51.293 "cluster_size": 4194304 00:22:51.293 } 00:22:51.293 ]' 00:22:51.293 09:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="30e7f634-c7b1-4821-9c94-75de87fa9e62") .free_clusters' 00:22:51.293 09:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1278 00:22:51.293 09:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="30e7f634-c7b1-4821-9c94-75de87fa9e62") .cluster_size' 00:22:51.293 5112 00:22:51.293 09:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:22:51.293 09:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5112 00:22:51.293 09:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5112 00:22:51.293 09:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:22:51.293 09:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 30e7f634-c7b1-4821-9c94-75de87fa9e62 lbd_0 5112 00:22:51.552 09:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=a1c49e11-4541-48f1-80e2-f40115fef57f 00:22:51.552 09:04:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore a1c49e11-4541-48f1-80e2-f40115fef57f lvs_n_0 00:22:52.119 09:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=66360309-3a05-4ce1-aa60-ebb1213a8b16 00:22:52.119 09:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 66360309-3a05-4ce1-aa60-ebb1213a8b16 00:22:52.119 09:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=66360309-3a05-4ce1-aa60-ebb1213a8b16 00:22:52.119 09:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:22:52.119 09:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:22:52.119 09:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:22:52.119 09:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:52.378 09:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:22:52.378 { 00:22:52.378 "uuid": "30e7f634-c7b1-4821-9c94-75de87fa9e62", 00:22:52.378 "name": "lvs_0", 00:22:52.378 "base_bdev": "Nvme0n1", 00:22:52.378 "total_data_clusters": 1278, 00:22:52.378 "free_clusters": 0, 00:22:52.378 "block_size": 4096, 00:22:52.378 "cluster_size": 4194304 00:22:52.378 }, 00:22:52.378 { 00:22:52.378 "uuid": "66360309-3a05-4ce1-aa60-ebb1213a8b16", 00:22:52.378 "name": "lvs_n_0", 00:22:52.378 "base_bdev": 
"a1c49e11-4541-48f1-80e2-f40115fef57f", 00:22:52.378 "total_data_clusters": 1276, 00:22:52.378 "free_clusters": 1276, 00:22:52.378 "block_size": 4096, 00:22:52.378 "cluster_size": 4194304 00:22:52.378 } 00:22:52.378 ]' 00:22:52.378 09:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="66360309-3a05-4ce1-aa60-ebb1213a8b16") .free_clusters' 00:22:52.378 09:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1276 00:22:52.378 09:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="66360309-3a05-4ce1-aa60-ebb1213a8b16") .cluster_size' 00:22:52.378 5104 00:22:52.378 09:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:22:52.378 09:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5104 00:22:52.378 09:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5104 00:22:52.378 09:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:22:52.378 09:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 66360309-3a05-4ce1-aa60-ebb1213a8b16 lbd_nest_0 5104 00:22:52.637 09:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=a732b9ab-91e7-402d-9c89-13df12de1ecb 00:22:52.637 09:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:52.894 09:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:22:52.895 09:04:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 a732b9ab-91e7-402d-9c89-13df12de1ecb 00:22:53.153 09:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:53.411 09:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:22:53.411 09:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:22:53.411 09:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:22:53.411 09:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:22:53.411 09:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:53.978 Initializing NVMe Controllers 00:22:53.979 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:53.979 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:22:53.979 WARNING: Some requested NVMe devices were skipped 00:22:53.979 No valid NVMe controllers or AIO or URING devices found 00:22:53.979 09:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:22:53.979 09:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:06.216 Initializing NVMe Controllers 00:23:06.216 Attached to NVMe over Fabrics 
controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:06.216 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:06.216 Initialization complete. Launching workers. 00:23:06.216 ======================================================== 00:23:06.216 Latency(us) 00:23:06.216 Device Information : IOPS MiB/s Average min max 00:23:06.216 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 744.68 93.09 1342.06 466.01 7956.08 00:23:06.216 ======================================================== 00:23:06.216 Total : 744.68 93.09 1342.06 466.01 7956.08 00:23:06.216 00:23:06.216 09:05:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:23:06.216 09:05:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:06.216 09:05:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:06.216 Initializing NVMe Controllers 00:23:06.216 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:06.217 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:23:06.217 WARNING: Some requested NVMe devices were skipped 00:23:06.217 No valid NVMe controllers or AIO or URING devices found 00:23:06.217 09:05:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:06.217 09:05:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:16.188 Initializing NVMe Controllers 00:23:16.188 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:16.188 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:16.188 Initialization complete. Launching workers. 
00:23:16.188 ======================================================== 00:23:16.188 Latency(us) 00:23:16.188 Device Information : IOPS MiB/s Average min max 00:23:16.188 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1354.75 169.34 23646.92 7939.04 59834.48 00:23:16.188 ======================================================== 00:23:16.188 Total : 1354.75 169.34 23646.92 7939.04 59834.48 00:23:16.188 00:23:16.188 09:05:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:23:16.188 09:05:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:16.188 09:05:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:16.188 Initializing NVMe Controllers 00:23:16.188 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:16.188 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:23:16.188 WARNING: Some requested NVMe devices were skipped 00:23:16.188 No valid NVMe controllers or AIO or URING devices found 00:23:16.188 09:05:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:16.188 09:05:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:26.159 Initializing NVMe Controllers 00:23:26.159 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:26.159 Controller IO queue size 128, less than required. 00:23:26.159 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:26.159 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:26.159 Initialization complete. Launching workers. 
00:23:26.159 ======================================================== 00:23:26.159 Latency(us) 00:23:26.159 Device Information : IOPS MiB/s Average min max 00:23:26.159 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3162.90 395.36 40550.53 7949.48 98848.88 00:23:26.159 ======================================================== 00:23:26.159 Total : 3162.90 395.36 40550.53 7949.48 98848.88 00:23:26.159 00:23:26.159 09:05:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:26.416 09:05:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete a732b9ab-91e7-402d-9c89-13df12de1ecb 00:23:26.674 09:05:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:23:27.241 09:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete a1c49e11-4541-48f1-80e2-f40115fef57f 00:23:27.499 09:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:23:27.499 09:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:27.499 09:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:27.499 09:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:27.499 09:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:23:27.759 09:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:27.759 09:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:23:27.759 09:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:27.759 09:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:27.759 rmmod nvme_tcp 00:23:27.759 rmmod nvme_fabrics 00:23:27.759 rmmod nvme_keyring 00:23:27.759 09:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:27.759 09:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:23:27.759 09:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:23:27.759 09:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 79951 ']' 00:23:27.759 09:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 79951 00:23:27.759 09:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 79951 ']' 00:23:27.759 09:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 79951 00:23:27.759 09:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:23:27.759 09:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:27.759 09:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79951 00:23:27.759 killing process with pid 79951 00:23:27.759 09:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:27.759 09:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:27.759 09:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79951' 00:23:27.759 09:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@969 -- # kill 79951 00:23:27.759 09:05:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 79951 00:23:30.301 09:05:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:30.301 09:05:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:30.301 09:05:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:30.301 09:05:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:30.301 09:05:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:30.301 09:05:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.301 09:05:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:30.301 09:05:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.301 09:05:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:30.301 00:23:30.301 real 0m53.850s 00:23:30.301 user 3m22.124s 00:23:30.301 sys 0m12.578s 00:23:30.301 09:05:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:30.301 09:05:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:30.301 ************************************ 00:23:30.301 END TEST nvmf_perf 00:23:30.301 ************************************ 00:23:30.301 09:05:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:30.301 09:05:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:30.301 09:05:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:30.301 09:05:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.301 ************************************ 00:23:30.301 START TEST nvmf_fio_host 00:23:30.301 ************************************ 00:23:30.301 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:30.301 * Looking for test storage... 
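The block of spdk_nvme_perf runs above is the nested sweep declared at perf.sh lines 95-99; a minimal equivalent sketch using the same flags and target address taken from the log:

qd_depth=("1" "32" "128")
io_size=("512" "131072")
for qd in "${qd_depth[@]}"; do
  for o in "${io_size[@]}"; do
    # 50/50 random read/write for 10 s against the TCP subsystem exported by the target
    ./build/bin/spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
  done
done

As the "invalid ns size ... for I/O size 512" warnings in the log indicate, the 512-byte passes skip the lvol namespace because it exposes a 4096-byte block size, so only the 131072-byte passes produce latency tables for it.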
00:23:30.301 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:30.301 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:30.301 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:30.301 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:30.301 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:30.302 09:05:36 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 
-- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:30.302 09:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.302 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:30.302 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:30.302 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:30.302 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:30.302 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:30.302 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:30.302 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:30.302 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:30.302 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:30.302 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:30.302 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:30.302 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:30.302 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:30.302 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:30.302 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:30.302 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:30.302 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:30.303 Cannot find device "nvmf_tgt_br" 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:30.303 Cannot find device "nvmf_tgt_br2" 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:30.303 
Cannot find device "nvmf_tgt_br" 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:30.303 Cannot find device "nvmf_tgt_br2" 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:30.303 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:30.303 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:30.303 09:05:37 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:30.303 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:30.303 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:23:30.303 00:23:30.303 --- 10.0.0.2 ping statistics --- 00:23:30.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.303 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:30.303 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:30.303 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:23:30.303 00:23:30.303 --- 10.0.0.3 ping statistics --- 00:23:30.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.303 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:30.303 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:30.303 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:23:30.303 00:23:30.303 --- 10.0.0.1 ping statistics --- 00:23:30.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.303 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
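The nvmf_veth_init trace above boils down to a small veth-plus-bridge topology: the initiator side stays in the default namespace on 10.0.0.1, while the target interfaces live in the nvmf_tgt_ns_spdk namespace on 10.0.0.2 and 10.0.0.3, all joined by the nvmf_br bridge. A condensed sketch, reusing the interface names from the log (the real helper in test/nvmf/common.sh wraps these steps with cleanup and error handling, and runs each command individually rather than in loops):

  # target-side interfaces live in their own network namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # initiator on 10.0.0.1, target listeners on 10.0.0.2 / 10.0.0.3
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # bring everything up and join the *_br peers with a bridge
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  # allow NVMe/TCP traffic to the default port and bridge-local forwarding
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The pings to 10.0.0.2, 10.0.0.3 and, from inside the namespace, back to 10.0.0.1 are just connectivity checks before the target application is started.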
00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=80802 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 80802 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 80802 ']' 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:30.303 09:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.562 [2024-07-25 09:05:37.500566] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:30.562 [2024-07-25 09:05:37.500711] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:30.562 [2024-07-25 09:05:37.670965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:30.821 [2024-07-25 09:05:37.915405] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:30.821 [2024-07-25 09:05:37.915488] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:30.821 [2024-07-25 09:05:37.915507] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:30.821 [2024-07-25 09:05:37.915523] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:30.821 [2024-07-25 09:05:37.915539] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
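Once nvmf_tgt is listening on /var/tmp/spdk.sock, the rest of this test is plain JSON-RPC provisioning followed by fio runs through the SPDK NVMe fio plugin. Stripped of the xtrace decoration, the flow traced below is roughly the following (paths, ports and NQNs exactly as they appear in the log; rpc.py is shortened to $rpc here, and the libasan entry in LD_PRELOAD is only present because this is an ASAN build):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # create the TCP transport and a malloc-backed subsystem with a listener on 10.0.0.2:4420
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # drive I/O from the initiator side with fio's external SPDK ioengine
  LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' \
    /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

Because the ioengine is spdk, the I/O path goes through the userspace SPDK NVMe/TCP initiator rather than the kernel nvme-tcp driver, which is why no nvme connect step appears in this test.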
00:23:30.821 [2024-07-25 09:05:37.915771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.821 [2024-07-25 09:05:37.916026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:30.821 [2024-07-25 09:05:37.916729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:30.821 [2024-07-25 09:05:37.916736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:31.080 [2024-07-25 09:05:38.125391] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:23:31.349 09:05:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:31.349 09:05:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:23:31.349 09:05:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:31.625 [2024-07-25 09:05:38.627017] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:31.625 09:05:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:31.625 09:05:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:31.625 09:05:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.625 09:05:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:31.884 Malloc1 00:23:31.884 09:05:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:32.451 09:05:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:32.451 09:05:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:32.709 [2024-07-25 09:05:39.789693] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:32.709 09:05:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:32.967 09:05:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:23:32.967 09:05:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:32.967 09:05:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:32.967 09:05:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:32.967 09:05:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:32.967 09:05:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:32.967 09:05:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:32.967 09:05:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:23:32.967 09:05:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:32.967 09:05:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:32.967 09:05:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:32.967 09:05:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:23:32.967 09:05:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:32.967 09:05:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:32.967 09:05:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:32.967 09:05:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:23:32.967 09:05:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:23:32.967 09:05:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:33.225 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:33.225 fio-3.35 00:23:33.225 Starting 1 thread 00:23:35.757 00:23:35.757 test: (groupid=0, jobs=1): err= 0: pid=80873: Thu Jul 25 09:05:42 2024 00:23:35.757 read: IOPS=6304, BW=24.6MiB/s (25.8MB/s)(49.5MiB/2009msec) 00:23:35.757 slat (usec): min=2, max=244, avg= 3.24, stdev= 3.00 00:23:35.757 clat (usec): min=2322, max=19485, avg=10552.40, stdev=1677.45 00:23:35.757 lat (usec): min=2361, max=19488, avg=10555.64, stdev=1677.30 00:23:35.757 clat percentiles (usec): 00:23:35.757 | 1.00th=[ 8356], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9372], 00:23:35.757 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10290], 00:23:35.757 | 70.00th=[10814], 80.00th=[11731], 90.00th=[12911], 95.00th=[14091], 00:23:35.757 | 99.00th=[16188], 99.50th=[16909], 99.90th=[18482], 99.95th=[18482], 00:23:35.757 | 99.99th=[19530] 00:23:35.757 bw ( KiB/s): min=22672, max=27880, per=99.97%, avg=25212.00, stdev=2137.37, samples=4 00:23:35.757 iops : min= 5668, max= 6970, avg=6303.00, stdev=534.34, samples=4 00:23:35.757 write: IOPS=6297, BW=24.6MiB/s (25.8MB/s)(49.4MiB/2009msec); 0 zone resets 00:23:35.757 slat (usec): min=2, max=220, avg= 3.37, stdev= 2.26 00:23:35.757 clat (usec): min=2147, max=18422, avg=9616.59, stdev=1512.63 00:23:35.757 lat (usec): min=2158, max=18425, avg=9619.96, stdev=1512.54 00:23:35.757 clat percentiles (usec): 00:23:35.757 | 1.00th=[ 7635], 5.00th=[ 8094], 10.00th=[ 8291], 20.00th=[ 8586], 00:23:35.757 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9110], 60.00th=[ 9372], 00:23:35.757 | 70.00th=[ 9765], 80.00th=[10683], 90.00th=[11731], 95.00th=[12780], 00:23:35.757 | 99.00th=[14746], 99.50th=[15270], 99.90th=[16909], 99.95th=[17433], 00:23:35.757 | 99.99th=[17695] 00:23:35.757 bw ( KiB/s): min=22152, max=27216, per=99.92%, avg=25170.00, stdev=2247.32, samples=4 00:23:35.757 iops : min= 5538, max= 6804, avg=6292.50, stdev=561.83, samples=4 
00:23:35.757 lat (msec) : 4=0.11%, 10=60.69%, 20=39.19% 00:23:35.757 cpu : usr=73.71%, sys=19.47%, ctx=35, majf=0, minf=1538 00:23:35.757 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:23:35.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:35.757 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:35.757 issued rwts: total=12666,12652,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:35.757 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:35.757 00:23:35.757 Run status group 0 (all jobs): 00:23:35.757 READ: bw=24.6MiB/s (25.8MB/s), 24.6MiB/s-24.6MiB/s (25.8MB/s-25.8MB/s), io=49.5MiB (51.9MB), run=2009-2009msec 00:23:35.757 WRITE: bw=24.6MiB/s (25.8MB/s), 24.6MiB/s-24.6MiB/s (25.8MB/s-25.8MB/s), io=49.4MiB (51.8MB), run=2009-2009msec 00:23:35.757 ----------------------------------------------------- 00:23:35.757 Suppressions used: 00:23:35.757 count bytes template 00:23:35.757 1 57 /usr/src/fio/parse.c 00:23:35.757 1 8 libtcmalloc_minimal.so 00:23:35.757 ----------------------------------------------------- 00:23:35.757 00:23:35.757 09:05:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:35.757 09:05:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:35.757 09:05:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:35.757 09:05:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:35.757 09:05:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:35.757 09:05:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:35.757 09:05:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:23:35.757 09:05:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:35.757 09:05:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:35.757 09:05:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:35.757 09:05:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:23:35.757 09:05:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:35.757 09:05:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:35.757 09:05:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:35.757 09:05:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:23:35.757 09:05:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:23:35.758 09:05:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:36.016 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:36.016 fio-3.35 00:23:36.016 Starting 1 thread 00:23:38.550 00:23:38.550 test: (groupid=0, jobs=1): err= 0: pid=80915: Thu Jul 25 09:05:45 2024 00:23:38.550 read: IOPS=6095, BW=95.2MiB/s (99.9MB/s)(191MiB/2008msec) 00:23:38.550 slat (usec): min=3, max=141, avg= 5.45, stdev= 2.97 00:23:38.550 clat (usec): min=3561, max=23391, avg=11756.05, stdev=3404.40 00:23:38.550 lat (usec): min=3565, max=23401, avg=11761.50, stdev=3404.86 00:23:38.550 clat percentiles (usec): 00:23:38.550 | 1.00th=[ 5735], 5.00th=[ 6915], 10.00th=[ 7635], 20.00th=[ 8717], 00:23:38.550 | 30.00th=[ 9634], 40.00th=[10552], 50.00th=[11469], 60.00th=[12518], 00:23:38.550 | 70.00th=[13304], 80.00th=[14484], 90.00th=[16581], 95.00th=[18220], 00:23:38.550 | 99.00th=[20579], 99.50th=[21365], 99.90th=[22676], 99.95th=[22938], 00:23:38.550 | 99.99th=[22938] 00:23:38.550 bw ( KiB/s): min=38560, max=58528, per=49.32%, avg=48104.00, stdev=9076.36, samples=4 00:23:38.550 iops : min= 2410, max= 3658, avg=3006.50, stdev=567.27, samples=4 00:23:38.550 write: IOPS=3463, BW=54.1MiB/s (56.7MB/s)(98.4MiB/1818msec); 0 zone resets 00:23:38.550 slat (usec): min=37, max=199, avg=44.32, stdev= 9.44 00:23:38.550 clat (usec): min=5070, max=30614, avg=16736.88, stdev=3780.84 00:23:38.550 lat (usec): min=5110, max=30674, avg=16781.21, stdev=3784.12 00:23:38.550 clat percentiles (usec): 00:23:38.550 | 1.00th=[ 9896], 5.00th=[11863], 10.00th=[12518], 20.00th=[13435], 00:23:38.550 | 30.00th=[14222], 40.00th=[15270], 50.00th=[16188], 60.00th=[17171], 00:23:38.550 | 70.00th=[18482], 80.00th=[19792], 90.00th=[21890], 95.00th=[23987], 00:23:38.550 | 99.00th=[26870], 99.50th=[27919], 99.90th=[29492], 99.95th=[29754], 00:23:38.550 | 99.99th=[30540] 00:23:38.550 bw ( KiB/s): min=40128, max=60320, per=90.00%, avg=49872.00, stdev=8990.98, samples=4 00:23:38.550 iops : min= 2508, max= 3770, avg=3117.00, stdev=561.94, samples=4 00:23:38.550 lat (msec) : 4=0.05%, 10=22.94%, 20=69.55%, 50=7.46% 00:23:38.550 cpu : usr=79.33%, sys=14.99%, ctx=32, majf=0, minf=2068 00:23:38.550 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:23:38.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:38.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:38.550 issued rwts: total=12240,6296,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:38.550 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:38.550 00:23:38.550 Run status group 0 (all jobs): 00:23:38.550 READ: bw=95.2MiB/s (99.9MB/s), 95.2MiB/s-95.2MiB/s (99.9MB/s-99.9MB/s), io=191MiB (201MB), run=2008-2008msec 00:23:38.550 WRITE: bw=54.1MiB/s (56.7MB/s), 54.1MiB/s-54.1MiB/s (56.7MB/s-56.7MB/s), io=98.4MiB (103MB), run=1818-1818msec 00:23:38.550 ----------------------------------------------------- 00:23:38.550 Suppressions used: 00:23:38.550 count bytes template 00:23:38.550 1 57 /usr/src/fio/parse.c 00:23:38.550 88 8448 /usr/src/fio/iolog.c 00:23:38.550 1 8 libtcmalloc_minimal.so 00:23:38.550 ----------------------------------------------------- 00:23:38.550 00:23:38.550 09:05:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:38.809 09:05:45 
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:23:38.809 09:05:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:23:38.809 09:05:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:23:38.809 09:05:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:23:38.809 09:05:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:23:38.809 09:05:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:23:38.809 09:05:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:38.809 09:05:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:23:39.068 09:05:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:23:39.068 09:05:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:23:39.068 09:05:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.2 00:23:39.326 Nvme0n1 00:23:39.326 09:05:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:23:39.584 09:05:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=4e61e650-f957-4d0d-b185-235bd2dd27f3 00:23:39.584 09:05:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 4e61e650-f957-4d0d-b185-235bd2dd27f3 00:23:39.584 09:05:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=4e61e650-f957-4d0d-b185-235bd2dd27f3 00:23:39.584 09:05:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:23:39.584 09:05:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:23:39.584 09:05:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:23:39.584 09:05:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:39.841 09:05:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:23:39.841 { 00:23:39.841 "uuid": "4e61e650-f957-4d0d-b185-235bd2dd27f3", 00:23:39.841 "name": "lvs_0", 00:23:39.841 "base_bdev": "Nvme0n1", 00:23:39.841 "total_data_clusters": 4, 00:23:39.841 "free_clusters": 4, 00:23:39.841 "block_size": 4096, 00:23:39.841 "cluster_size": 1073741824 00:23:39.841 } 00:23:39.841 ]' 00:23:39.841 09:05:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="4e61e650-f957-4d0d-b185-235bd2dd27f3") .free_clusters' 00:23:39.841 09:05:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=4 00:23:39.841 09:05:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="4e61e650-f957-4d0d-b185-235bd2dd27f3") .cluster_size' 00:23:39.841 4096 00:23:39.841 09:05:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:23:39.841 09:05:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4096 00:23:39.841 09:05:46 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4096 00:23:39.841 09:05:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:23:40.098 124bfc5e-e5d0-40b0-9e36-2acf52c6b540 00:23:40.098 09:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:23:40.357 09:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:23:40.614 09:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:40.926 09:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:40.926 09:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:40.926 09:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:40.926 09:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:40.926 09:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:40.926 09:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:40.926 09:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:23:40.926 09:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:40.926 09:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:40.926 09:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:40.926 09:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:23:40.926 09:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:40.926 09:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:40.926 09:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:40.926 09:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:23:40.926 09:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:23:40.926 09:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:41.202 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:41.202 fio-3.35 
00:23:41.202 Starting 1 thread 00:23:43.726 00:23:43.726 test: (groupid=0, jobs=1): err= 0: pid=81017: Thu Jul 25 09:05:50 2024 00:23:43.726 read: IOPS=3981, BW=15.6MiB/s (16.3MB/s)(31.3MiB/2010msec) 00:23:43.726 slat (usec): min=2, max=227, avg= 3.85, stdev= 3.52 00:23:43.726 clat (usec): min=4357, max=28485, avg=16702.95, stdev=2668.43 00:23:43.726 lat (usec): min=4363, max=28488, avg=16706.80, stdev=2668.35 00:23:43.726 clat percentiles (usec): 00:23:43.726 | 1.00th=[12256], 5.00th=[13304], 10.00th=[13829], 20.00th=[14615], 00:23:43.726 | 30.00th=[15139], 40.00th=[15664], 50.00th=[16188], 60.00th=[16712], 00:23:43.726 | 70.00th=[17433], 80.00th=[18744], 90.00th=[20579], 95.00th=[21890], 00:23:43.726 | 99.00th=[24773], 99.50th=[25560], 99.90th=[26608], 99.95th=[27132], 00:23:43.726 | 99.99th=[28443] 00:23:43.726 bw ( KiB/s): min=13000, max=17056, per=99.63%, avg=15866.00, stdev=1921.46, samples=4 00:23:43.726 iops : min= 3250, max= 4264, avg=3966.50, stdev=480.36, samples=4 00:23:43.726 write: IOPS=3997, BW=15.6MiB/s (16.4MB/s)(31.4MiB/2010msec); 0 zone resets 00:23:43.726 slat (usec): min=2, max=156, avg= 4.01, stdev= 2.20 00:23:43.726 clat (usec): min=2607, max=25852, avg=15164.19, stdev=2398.59 00:23:43.726 lat (usec): min=2619, max=25855, avg=15168.20, stdev=2398.63 00:23:43.726 clat percentiles (usec): 00:23:43.726 | 1.00th=[11207], 5.00th=[12125], 10.00th=[12518], 20.00th=[13304], 00:23:43.726 | 30.00th=[13829], 40.00th=[14222], 50.00th=[14746], 60.00th=[15270], 00:23:43.726 | 70.00th=[15926], 80.00th=[16909], 90.00th=[18482], 95.00th=[19792], 00:23:43.726 | 99.00th=[22414], 99.50th=[22938], 99.90th=[23987], 99.95th=[24511], 00:23:43.726 | 99.99th=[25822] 00:23:43.726 bw ( KiB/s): min=13512, max=17408, per=99.77%, avg=15954.00, stdev=1734.58, samples=4 00:23:43.726 iops : min= 3378, max= 4352, avg=3988.50, stdev=433.65, samples=4 00:23:43.726 lat (msec) : 4=0.01%, 10=0.26%, 20=91.39%, 50=8.34% 00:23:43.726 cpu : usr=74.27%, sys=20.26%, ctx=11, majf=0, minf=1538 00:23:43.726 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:23:43.726 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:43.726 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:43.726 issued rwts: total=8002,8035,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:43.726 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:43.726 00:23:43.726 Run status group 0 (all jobs): 00:23:43.726 READ: bw=15.6MiB/s (16.3MB/s), 15.6MiB/s-15.6MiB/s (16.3MB/s-16.3MB/s), io=31.3MiB (32.8MB), run=2010-2010msec 00:23:43.726 WRITE: bw=15.6MiB/s (16.4MB/s), 15.6MiB/s-15.6MiB/s (16.4MB/s-16.4MB/s), io=31.4MiB (32.9MB), run=2010-2010msec 00:23:43.726 ----------------------------------------------------- 00:23:43.726 Suppressions used: 00:23:43.726 count bytes template 00:23:43.726 1 58 /usr/src/fio/parse.c 00:23:43.726 1 8 libtcmalloc_minimal.so 00:23:43.726 ----------------------------------------------------- 00:23:43.726 00:23:43.726 09:05:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:43.984 09:05:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:23:44.547 09:05:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=b46a0894-21e7-4179-a49e-cb4ea6b699fa 00:23:44.547 09:05:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
host/fio.sh@65 -- # get_lvs_free_mb b46a0894-21e7-4179-a49e-cb4ea6b699fa 00:23:44.547 09:05:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=b46a0894-21e7-4179-a49e-cb4ea6b699fa 00:23:44.547 09:05:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:23:44.547 09:05:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:23:44.547 09:05:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:23:44.547 09:05:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:44.547 09:05:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:23:44.547 { 00:23:44.547 "uuid": "4e61e650-f957-4d0d-b185-235bd2dd27f3", 00:23:44.547 "name": "lvs_0", 00:23:44.547 "base_bdev": "Nvme0n1", 00:23:44.547 "total_data_clusters": 4, 00:23:44.547 "free_clusters": 0, 00:23:44.547 "block_size": 4096, 00:23:44.547 "cluster_size": 1073741824 00:23:44.547 }, 00:23:44.547 { 00:23:44.547 "uuid": "b46a0894-21e7-4179-a49e-cb4ea6b699fa", 00:23:44.547 "name": "lvs_n_0", 00:23:44.547 "base_bdev": "124bfc5e-e5d0-40b0-9e36-2acf52c6b540", 00:23:44.547 "total_data_clusters": 1022, 00:23:44.547 "free_clusters": 1022, 00:23:44.547 "block_size": 4096, 00:23:44.547 "cluster_size": 4194304 00:23:44.547 } 00:23:44.547 ]' 00:23:44.548 09:05:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="b46a0894-21e7-4179-a49e-cb4ea6b699fa") .free_clusters' 00:23:44.805 09:05:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=1022 00:23:44.805 09:05:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="b46a0894-21e7-4179-a49e-cb4ea6b699fa") .cluster_size' 00:23:44.805 4088 00:23:44.805 09:05:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:23:44.805 09:05:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4088 00:23:44.805 09:05:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4088 00:23:44.805 09:05:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:23:45.063 fb733260-5f86-4aac-996f-44d1fd0fa967 00:23:45.063 09:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:23:45.321 09:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:23:45.886 09:05:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:23:46.144 09:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:46.144 09:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 
ns=1' --bs=4096 00:23:46.144 09:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:46.144 09:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:46.144 09:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:46.144 09:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:46.144 09:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:23:46.144 09:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:46.144 09:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:46.144 09:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:46.144 09:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:46.144 09:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:23:46.144 09:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:46.144 09:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:46.144 09:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:23:46.144 09:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:23:46.144 09:05:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:46.144 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:46.144 fio-3.35 00:23:46.144 Starting 1 thread 00:23:48.675 00:23:48.675 test: (groupid=0, jobs=1): err= 0: pid=81094: Thu Jul 25 09:05:55 2024 00:23:48.675 read: IOPS=4465, BW=17.4MiB/s (18.3MB/s)(35.1MiB/2012msec) 00:23:48.675 slat (usec): min=2, max=180, avg= 3.20, stdev= 2.69 00:23:48.675 clat (usec): min=4234, max=26608, avg=14974.22, stdev=1312.77 00:23:48.675 lat (usec): min=4245, max=26611, avg=14977.42, stdev=1312.46 00:23:48.675 clat percentiles (usec): 00:23:48.675 | 1.00th=[12256], 5.00th=[13173], 10.00th=[13566], 20.00th=[13960], 00:23:48.675 | 30.00th=[14353], 40.00th=[14615], 50.00th=[14877], 60.00th=[15270], 00:23:48.675 | 70.00th=[15533], 80.00th=[15926], 90.00th=[16450], 95.00th=[16909], 00:23:48.675 | 99.00th=[17957], 99.50th=[18482], 99.90th=[25035], 99.95th=[25297], 00:23:48.675 | 99.99th=[26608] 00:23:48.675 bw ( KiB/s): min=16832, max=18360, per=99.72%, avg=17810.00, stdev=671.76, samples=4 00:23:48.675 iops : min= 4208, max= 4590, avg=4452.50, stdev=167.94, samples=4 00:23:48.675 write: IOPS=4454, BW=17.4MiB/s (18.2MB/s)(35.0MiB/2012msec); 0 zone resets 00:23:48.675 slat (usec): min=2, max=133, avg= 3.34, stdev= 1.91 00:23:48.675 clat (usec): min=2691, max=25119, avg=13525.73, stdev=1221.80 00:23:48.675 lat (usec): min=2705, max=25122, avg=13529.08, stdev=1221.62 00:23:48.675 clat percentiles (usec): 00:23:48.675 | 1.00th=[10945], 5.00th=[11863], 
10.00th=[12125], 20.00th=[12649], 00:23:48.675 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13435], 60.00th=[13829], 00:23:48.675 | 70.00th=[14091], 80.00th=[14484], 90.00th=[14877], 95.00th=[15270], 00:23:48.675 | 99.00th=[16188], 99.50th=[16581], 99.90th=[23462], 99.95th=[23725], 00:23:48.675 | 99.99th=[25035] 00:23:48.675 bw ( KiB/s): min=17752, max=17880, per=100.00%, avg=17830.00, stdev=55.57, samples=4 00:23:48.675 iops : min= 4438, max= 4470, avg=4457.50, stdev=13.89, samples=4 00:23:48.675 lat (msec) : 4=0.02%, 10=0.31%, 20=99.47%, 50=0.21% 00:23:48.675 cpu : usr=76.48%, sys=18.60%, ctx=16, majf=0, minf=1538 00:23:48.675 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:23:48.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.675 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:48.675 issued rwts: total=8984,8963,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:48.675 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:48.675 00:23:48.675 Run status group 0 (all jobs): 00:23:48.675 READ: bw=17.4MiB/s (18.3MB/s), 17.4MiB/s-17.4MiB/s (18.3MB/s-18.3MB/s), io=35.1MiB (36.8MB), run=2012-2012msec 00:23:48.675 WRITE: bw=17.4MiB/s (18.2MB/s), 17.4MiB/s-17.4MiB/s (18.2MB/s-18.2MB/s), io=35.0MiB (36.7MB), run=2012-2012msec 00:23:48.675 ----------------------------------------------------- 00:23:48.675 Suppressions used: 00:23:48.675 count bytes template 00:23:48.675 1 58 /usr/src/fio/parse.c 00:23:48.675 1 8 libtcmalloc_minimal.so 00:23:48.675 ----------------------------------------------------- 00:23:48.675 00:23:48.675 09:05:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:23:48.934 09:05:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:23:49.194 09:05:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:23:49.453 09:05:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:23:49.453 09:05:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:23:49.711 09:05:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:23:50.276 09:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:23:50.533 09:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:50.533 09:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:23:50.533 09:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:23:50.533 09:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:50.533 09:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:23:50.533 09:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:50.533 09:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:23:50.533 09:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:50.533 09:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:23:50.533 rmmod nvme_tcp 00:23:50.791 rmmod nvme_fabrics 00:23:50.791 rmmod nvme_keyring 00:23:50.791 09:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:50.791 09:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:23:50.791 09:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:23:50.791 09:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 80802 ']' 00:23:50.791 09:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 80802 00:23:50.791 09:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 80802 ']' 00:23:50.791 09:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 80802 00:23:50.791 09:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:23:50.791 09:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:50.791 09:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80802 00:23:50.791 09:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:50.791 09:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:50.791 killing process with pid 80802 00:23:50.791 09:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80802' 00:23:50.791 09:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 80802 00:23:50.791 09:05:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 80802 00:23:52.174 09:05:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:52.174 09:05:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:52.174 09:05:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:52.174 09:05:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:52.174 09:05:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:52.174 09:05:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:52.174 09:05:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:52.174 09:05:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:52.174 09:05:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:52.174 00:23:52.174 real 0m22.226s 00:23:52.174 user 1m36.332s 00:23:52.174 sys 0m4.786s 00:23:52.174 09:05:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:52.174 09:05:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.174 ************************************ 00:23:52.174 END TEST nvmf_fio_host 00:23:52.174 ************************************ 00:23:52.174 09:05:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:52.174 09:05:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:52.174 09:05:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:23:52.174 09:05:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.174 ************************************ 00:23:52.174 START TEST nvmf_failover 00:23:52.174 ************************************ 00:23:52.174 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:52.174 * Looking for test storage... 00:23:52.174 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:52.174 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:52.174 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:52.174 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:52.174 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:52.174 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:52.174 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:52.174 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:52.174 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:52.174 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:52.174 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:52.174 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:52.174 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:52.174 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:23:52.174 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:23:52.174 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:52.174 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:52.174 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:52.174 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:52.174 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:52.174 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:52.174 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:52.174 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:52.174 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.174 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.175 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.175 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:23:52.175 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.175 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:23:52.175 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:52.175 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:52.175 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:52.175 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:52.175 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:52.175 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:52.175 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:52.175 09:05:59 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:52.175 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:52.175 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:52.175 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:52.175 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:52.175 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:23:52.175 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:52.175 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:52.175 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:52.175 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:52.175 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:52.175 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:52.175 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:52.175 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:52.434 09:05:59 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:52.434 Cannot find device "nvmf_tgt_br" 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # true 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:52.434 Cannot find device "nvmf_tgt_br2" 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # true 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:52.434 Cannot find device "nvmf_tgt_br" 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # true 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:52.434 Cannot find device "nvmf_tgt_br2" 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # true 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:52.434 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:52.434 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:52.434 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:52.693 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:52.693 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:52.693 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:52.693 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:52.693 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:52.693 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:52.693 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:23:52.693 00:23:52.693 --- 10.0.0.2 ping statistics --- 00:23:52.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.693 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:23:52.693 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:52.693 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:52.693 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:23:52.693 00:23:52.693 --- 10.0.0.3 ping statistics --- 00:23:52.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.693 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:23:52.693 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:52.693 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:52.693 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:23:52.693 00:23:52.693 --- 10.0.0.1 ping statistics --- 00:23:52.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.693 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:23:52.693 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:52.693 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:23:52.693 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:52.693 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:52.694 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:52.694 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:52.694 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:52.694 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:52.694 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:52.694 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:52.694 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:52.694 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:52.694 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:52.694 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=81342 00:23:52.694 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:52.694 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 81342 00:23:52.694 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 81342 ']' 00:23:52.694 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.694 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:52.694 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.694 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:52.694 09:05:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:52.694 [2024-07-25 09:05:59.740158] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:52.694 [2024-07-25 09:05:59.740334] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:52.953 [2024-07-25 09:05:59.914872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:53.212 [2024-07-25 09:06:00.194561] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:53.212 [2024-07-25 09:06:00.194633] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:53.212 [2024-07-25 09:06:00.194652] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:53.212 [2024-07-25 09:06:00.194667] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:53.212 [2024-07-25 09:06:00.194680] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:53.212 [2024-07-25 09:06:00.194838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:53.212 [2024-07-25 09:06:00.194994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:53.212 [2024-07-25 09:06:00.195009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:53.470 [2024-07-25 09:06:00.406022] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:23:53.729 09:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:53.729 09:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:23:53.729 09:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:53.729 09:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:53.729 09:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:53.729 09:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.729 09:06:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:53.988 [2024-07-25 09:06:00.969344] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.988 09:06:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:54.246 Malloc0 00:23:54.246 09:06:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:54.504 09:06:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:55.071 09:06:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:55.071 [2024-07-25 09:06:02.175321] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:55.329 09:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:55.587 [2024-07-25 09:06:02.447504] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:55.587 09:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:55.845 [2024-07-25 09:06:02.723792] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 
port 4422 *** 00:23:55.845 09:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=81404 00:23:55.845 09:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:55.845 09:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:55.845 09:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 81404 /var/tmp/bdevperf.sock 00:23:55.845 09:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 81404 ']' 00:23:55.845 09:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:55.845 09:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:55.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:55.845 09:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:55.845 09:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:55.845 09:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:56.780 09:06:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:56.780 09:06:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:23:56.780 09:06:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:57.039 NVMe0n1 00:23:57.039 09:06:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:57.297 00:23:57.555 09:06:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=81429 00:23:57.555 09:06:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:57.555 09:06:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:23:58.488 09:06:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:58.746 09:06:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:02.028 09:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:02.028 00:24:02.286 09:06:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:02.544 09:06:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 
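For reference, the failover exercise above boils down to a short RPC sequence. A minimal bash sketch follows; it is simplified from what host/failover.sh actually drives (backgrounding and ordering are illustrative), but the binaries, sockets, addresses and ports are the ones shown in this run:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
# start bdevperf idle (-z) on its own RPC socket: 128 queue depth, 4 KiB verify workload, 15 s
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r "$sock" -q 128 -o 4096 -w verify -t 15 -f &
# attach the same subsystem through two portals so bdev_nvme has a second path to fail over to
"$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
"$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# kick off I/O, then pull the active listener out from under it on the target side
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests &
"$rpc" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420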
00:24:05.841 09:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:05.841 [2024-07-25 09:06:12.762139] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:05.841 09:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:24:06.776 09:06:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:07.341 09:06:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 81429 00:24:12.610 0 00:24:12.610 09:06:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 81404 00:24:12.610 09:06:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 81404 ']' 00:24:12.610 09:06:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 81404 00:24:12.610 09:06:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:24:12.610 09:06:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:12.610 09:06:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81404 00:24:12.610 killing process with pid 81404 00:24:12.610 09:06:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:12.610 09:06:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:12.610 09:06:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81404' 00:24:12.610 09:06:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 81404 00:24:12.610 09:06:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 81404 00:24:13.990 09:06:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:13.990 [2024-07-25 09:06:02.843568] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:24:13.990 [2024-07-25 09:06:02.843761] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81404 ] 00:24:13.990 [2024-07-25 09:06:03.016416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.990 [2024-07-25 09:06:03.338292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.990 [2024-07-25 09:06:03.586596] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:13.990 Running I/O for 15 seconds... 
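The long run of nvme_qpair notices below is part of the try.txt dump started just above (bdevperf's own log replayed after the run), and it is, roughly, the expected side effect of the listener cycling rather than a failure: dropping the listener the active path is connected to tears down that path's queue pairs, so in-flight admin and I/O commands complete as ABORTED - SQ DELETION while the verify workload continues on the path that is still listening. When reading the saved try.txt offline, the records condense easily; illustrative one-liners, using the try.txt path printed above:

grep -c 'ABORTED - SQ DELETION' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt         # aborted completions
grep -c 'nvme_io_qpair_print_command' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt   # I/O commands printed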
00:24:13.991 [2024-07-25 09:06:05.696969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:13.991 [2024-07-25 09:06:05.697534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.991 [2024-07-25 09:06:05.697681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:13.991 [2024-07-25 09:06:05.697838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.991 [2024-07-25 09:06:05.697975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:13.991 [2024-07-25 09:06:05.698096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.991 [2024-07-25 09:06:05.698207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:13.991 [2024-07-25 09:06:05.698315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.991 [2024-07-25 09:06:05.698416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set 00:24:13.991 [2024-07-25 09:06:05.698882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:46240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.991 [2024-07-25 09:06:05.699026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.991 [2024-07-25 09:06:05.699166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:46368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.991 [2024-07-25 09:06:05.699325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.991 [2024-07-25 09:06:05.699441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:46376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.991 [2024-07-25 09:06:05.699561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.991 [2024-07-25 09:06:05.699667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:46384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.991 [2024-07-25 09:06:05.699781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.991 [2024-07-25 09:06:05.699920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:46392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.991 [2024-07-25 09:06:05.700083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.991 [2024-07-25 09:06:05.700197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:46400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.991 [2024-07-25 09:06:05.700313] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.991 [2024-07-25 09:06:05.700455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:46408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.991 [2024-07-25 09:06:05.700578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.991 [2024-07-25 09:06:05.700684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:46416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.991 [2024-07-25 09:06:05.700795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.991 [2024-07-25 09:06:05.700928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:46424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.991 [2024-07-25 09:06:05.701045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.991 [2024-07-25 09:06:05.701155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:46432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.991 [2024-07-25 09:06:05.701271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.991 [2024-07-25 09:06:05.701378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:46440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.991 [2024-07-25 09:06:05.701491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.991 [2024-07-25 09:06:05.701599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:46448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.991 [2024-07-25 09:06:05.701711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.991 [2024-07-25 09:06:05.701831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:46456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.991 [2024-07-25 09:06:05.701962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.991 [2024-07-25 09:06:05.702075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:46464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.991 [2024-07-25 09:06:05.702190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.991 [2024-07-25 09:06:05.702296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:46472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.991 [2024-07-25 09:06:05.702414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.991 [2024-07-25 09:06:05.702519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:46480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.991 [2024-07-25 09:06:05.702628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.991 [2024-07-25 09:06:05.702738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:46488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.991 [2024-07-25 09:06:05.702862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.991 [2024-07-25 09:06:05.702976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:46496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.991 [2024-07-25 09:06:05.703090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.991 [2024-07-25 09:06:05.703133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:46504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.991 [2024-07-25 09:06:05.703166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.991 [2024-07-25 09:06:05.703208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:46512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.991 [2024-07-25 09:06:05.703239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.991 [2024-07-25 09:06:05.703268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:46520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.991 [2024-07-25 09:06:05.703297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.991 [2024-07-25 09:06:05.703325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:46528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.991 [2024-07-25 09:06:05.703353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.991 [2024-07-25 09:06:05.703382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:46536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.991 [2024-07-25 09:06:05.703416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.991 [2024-07-25 09:06:05.703445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:46544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.991 [2024-07-25 09:06:05.703473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.991 [2024-07-25 09:06:05.703501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:46552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.991 [2024-07-25 09:06:05.703530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.991 [2024-07-25 09:06:05.703558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:46560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.991 [2024-07-25 09:06:05.703587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:13.991 [2024-07-25 09:06:05.703615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:46568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.991 [2024-07-25 09:06:05.703644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.991 [2024-07-25 09:06:05.703671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:46576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.992 [2024-07-25 09:06:05.703700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.992 [2024-07-25 09:06:05.703728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:46584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.992 [2024-07-25 09:06:05.703756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.992 [2024-07-25 09:06:05.703784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:46592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.992 [2024-07-25 09:06:05.704824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.992 [2024-07-25 09:06:05.704949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:46600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.992 [2024-07-25 09:06:05.705068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.992 [2024-07-25 09:06:05.705177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:46608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.992 [2024-07-25 09:06:05.705308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.992 [2024-07-25 09:06:05.705413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:46616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.992 [2024-07-25 09:06:05.705525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.992 [2024-07-25 09:06:05.705632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:46624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.992 [2024-07-25 09:06:05.705746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.992 [2024-07-25 09:06:05.705864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:46632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.992 [2024-07-25 09:06:05.705986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.992 [2024-07-25 09:06:05.706093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:46640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.992 [2024-07-25 09:06:05.706212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.992 [2024-07-25 09:06:05.706323] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:46648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.992 [2024-07-25 09:06:05.706437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.992 [2024-07-25 09:06:05.706542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:46656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.992 [2024-07-25 09:06:05.706656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.992 [2024-07-25 09:06:05.706762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:46664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.992 [2024-07-25 09:06:05.706901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.992 [2024-07-25 09:06:05.707012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:46672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.992 [2024-07-25 09:06:05.707126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.992 [2024-07-25 09:06:05.707237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:46680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.992 [2024-07-25 09:06:05.707354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.992 [2024-07-25 09:06:05.707459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:46688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.992 [2024-07-25 09:06:05.707594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.992 [2024-07-25 09:06:05.707702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:46696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.992 [2024-07-25 09:06:05.707830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.992 [2024-07-25 09:06:05.707982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:46704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.992 [2024-07-25 09:06:05.708106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.992 [2024-07-25 09:06:05.708217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:46712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.992 [2024-07-25 09:06:05.708326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.992 [2024-07-25 09:06:05.708431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:46720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.992 [2024-07-25 09:06:05.708553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.992 [2024-07-25 09:06:05.708653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:28 nsid:1 lba:46728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.992 [2024-07-25 09:06:05.708769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.992 [2024-07-25 09:06:05.708883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:46736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.992 [2024-07-25 09:06:05.709010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.992 [2024-07-25 09:06:05.709118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:46744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.992 [2024-07-25 09:06:05.709236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.992 [2024-07-25 09:06:05.709343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:46752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.992 [2024-07-25 09:06:05.709450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.992 [2024-07-25 09:06:05.709561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:46760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.992 [2024-07-25 09:06:05.709611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.992 [2024-07-25 09:06:05.709644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:46768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.992 [2024-07-25 09:06:05.709673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.992 [2024-07-25 09:06:05.709701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:46776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.992 [2024-07-25 09:06:05.709730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.992 [2024-07-25 09:06:05.709758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:46784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.992 [2024-07-25 09:06:05.709787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.992 [2024-07-25 09:06:05.709842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:46792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.992 [2024-07-25 09:06:05.709881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.992 [2024-07-25 09:06:05.709911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:46800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.992 [2024-07-25 09:06:05.709940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.992 [2024-07-25 09:06:05.709968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:46808 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:24:13.992 [2024-07-25 09:06:05.710009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.992 [2024-07-25 09:06:05.710039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:46816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.992 [2024-07-25 09:06:05.710068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.992 [2024-07-25 09:06:05.710095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:46824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.992 [2024-07-25 09:06:05.710126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.992 [2024-07-25 09:06:05.710154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:46832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.992 [2024-07-25 09:06:05.710182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.992 [2024-07-25 09:06:05.710210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:46840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.992 [2024-07-25 09:06:05.710238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.992 [2024-07-25 09:06:05.710266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:46848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.993 [2024-07-25 09:06:05.710294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.993 [2024-07-25 09:06:05.710322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:46856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.993 [2024-07-25 09:06:05.710353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.993 [2024-07-25 09:06:05.710381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:46864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.993 [2024-07-25 09:06:05.710417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.993 [2024-07-25 09:06:05.710446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:46872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.993 [2024-07-25 09:06:05.710474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.993 [2024-07-25 09:06:05.710502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:46880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.993 [2024-07-25 09:06:05.710531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.993 [2024-07-25 09:06:05.710558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:46888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.993 [2024-07-25 
09:06:05.710587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.993 [2024-07-25 09:06:05.710615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:46896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.993 [2024-07-25 09:06:05.710643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.993 [2024-07-25 09:06:05.710671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:46904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.993 [2024-07-25 09:06:05.710699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.993 [2024-07-25 09:06:05.710727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:46912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.993 [2024-07-25 09:06:05.710764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.993 [2024-07-25 09:06:05.710793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:46920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.993 [2024-07-25 09:06:05.712546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.993 [2024-07-25 09:06:05.712648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:46928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.993 [2024-07-25 09:06:05.712763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.993 [2024-07-25 09:06:05.712890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:46936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.993 [2024-07-25 09:06:05.713006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.993 [2024-07-25 09:06:05.713114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:46944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.993 [2024-07-25 09:06:05.713230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.993 [2024-07-25 09:06:05.713324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:46952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.993 [2024-07-25 09:06:05.713440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.993 [2024-07-25 09:06:05.713544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:46960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.993 [2024-07-25 09:06:05.713665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.993 [2024-07-25 09:06:05.713777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:46968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.993 [2024-07-25 09:06:05.713923] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.993 [2024-07-25 09:06:05.714036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:46976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.993 [2024-07-25 09:06:05.714148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.993 [2024-07-25 09:06:05.714248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:46984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.993 [2024-07-25 09:06:05.714365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.993 [2024-07-25 09:06:05.714471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:46992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.993 [2024-07-25 09:06:05.714586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.993 [2024-07-25 09:06:05.714695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:47000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.993 [2024-07-25 09:06:05.714827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.993 [2024-07-25 09:06:05.714945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:47008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.993 [2024-07-25 09:06:05.715061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.993 [2024-07-25 09:06:05.715172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.993 [2024-07-25 09:06:05.715290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.993 [2024-07-25 09:06:05.715407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:47024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.993 [2024-07-25 09:06:05.715524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.993 [2024-07-25 09:06:05.715618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:47032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.993 [2024-07-25 09:06:05.715737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.993 [2024-07-25 09:06:05.715859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:47040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.993 [2024-07-25 09:06:05.715999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.993 [2024-07-25 09:06:05.716122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:47048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.993 [2024-07-25 09:06:05.716239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.993 [2024-07-25 09:06:05.716334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:47056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.993 [2024-07-25 09:06:05.716444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.993 [2024-07-25 09:06:05.716549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:47064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.993 [2024-07-25 09:06:05.716668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.993 [2024-07-25 09:06:05.716773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:47072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.993 [2024-07-25 09:06:05.716911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.993 [2024-07-25 09:06:05.717021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:47080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.993 [2024-07-25 09:06:05.717140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.993 [2024-07-25 09:06:05.717250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:47088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.993 [2024-07-25 09:06:05.717367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.993 [2024-07-25 09:06:05.717473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:47096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.993 [2024-07-25 09:06:05.717586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.993 [2024-07-25 09:06:05.717698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:47104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.993 [2024-07-25 09:06:05.717830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.993 [2024-07-25 09:06:05.717941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:47112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.993 [2024-07-25 09:06:05.718071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.993 [2024-07-25 09:06:05.718167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:47120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.993 [2024-07-25 09:06:05.718291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.993 [2024-07-25 09:06:05.718405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:47128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.994 [2024-07-25 09:06:05.718518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:24:13.994 [2024-07-25 09:06:05.718623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:47136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.994 [2024-07-25 09:06:05.718747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.994 [2024-07-25 09:06:05.718875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:47144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.994 [2024-07-25 09:06:05.718985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.994 [2024-07-25 09:06:05.719092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:47152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.994 [2024-07-25 09:06:05.719207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.994 [2024-07-25 09:06:05.719312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:47160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.994 [2024-07-25 09:06:05.719421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.994 [2024-07-25 09:06:05.719526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:47168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.994 [2024-07-25 09:06:05.719636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.994 [2024-07-25 09:06:05.719730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:47176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.994 [2024-07-25 09:06:05.719865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.994 [2024-07-25 09:06:05.720010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:47184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.994 [2024-07-25 09:06:05.720133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.994 [2024-07-25 09:06:05.720239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:47192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.994 [2024-07-25 09:06:05.720351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.994 [2024-07-25 09:06:05.720457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:47200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.994 [2024-07-25 09:06:05.720518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.994 [2024-07-25 09:06:05.720551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:47208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.994 [2024-07-25 09:06:05.720580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.994 [2024-07-25 09:06:05.720622] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:47216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.994 [2024-07-25 09:06:05.720655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.994 [2024-07-25 09:06:05.720684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:47224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.994 [2024-07-25 09:06:05.720713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.994 [2024-07-25 09:06:05.720741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:47232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.994 [2024-07-25 09:06:05.720769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.994 [2024-07-25 09:06:05.720797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:47240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.994 [2024-07-25 09:06:05.720847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.994 [2024-07-25 09:06:05.720878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:46248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.994 [2024-07-25 09:06:05.720906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.994 [2024-07-25 09:06:05.720935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:46256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.994 [2024-07-25 09:06:05.720964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.994 [2024-07-25 09:06:05.720992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:46264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.994 [2024-07-25 09:06:05.721025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.994 [2024-07-25 09:06:05.721053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:46272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.994 [2024-07-25 09:06:05.721081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.994 [2024-07-25 09:06:05.721109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:46280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.994 [2024-07-25 09:06:05.721137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.994 [2024-07-25 09:06:05.721165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:46288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.994 [2024-07-25 09:06:05.721193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.994 [2024-07-25 09:06:05.721225] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:46296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.994 [2024-07-25 09:06:05.721253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.994 [2024-07-25 09:06:05.721281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:46304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.994 [2024-07-25 09:06:05.721312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.994 [2024-07-25 09:06:05.721340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:46312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.994 [2024-07-25 09:06:05.721368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.994 [2024-07-25 09:06:05.721408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:46320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.994 [2024-07-25 09:06:05.721438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.994 [2024-07-25 09:06:05.721466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:46328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.994 [2024-07-25 09:06:05.721495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.994 [2024-07-25 09:06:05.721523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:46336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.994 [2024-07-25 09:06:05.721551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.994 [2024-07-25 09:06:05.721579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:46344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.994 [2024-07-25 09:06:05.721607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.994 [2024-07-25 09:06:05.721635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:46352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.994 [2024-07-25 09:06:05.721663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.994 [2024-07-25 09:06:05.721691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:46360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.994 [2024-07-25 09:06:05.721719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.994 [2024-07-25 09:06:05.721746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:47248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.994 [2024-07-25 09:06:05.721777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.994 [2024-07-25 09:06:05.721804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x61500002b780 is same with the state(5) to be set 00:24:13.994 [2024-07-25 09:06:05.721856] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:13.994 [2024-07-25 09:06:05.721879] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:13.994 [2024-07-25 09:06:05.721901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47256 len:8 PRP1 0x0 PRP2 0x0 00:24:13.994 [2024-07-25 09:06:05.721924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.994 [2024-07-25 09:06:05.722217] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b780 was disconnected and freed. reset controller. 00:24:13.994 [2024-07-25 09:06:05.722252] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:13.994 [2024-07-25 09:06:05.722287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:13.994 [2024-07-25 09:06:05.722424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:24:13.994 [2024-07-25 09:06:05.728750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:13.995 [2024-07-25 09:06:05.782903] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:13.995 [2024-07-25 09:06:09.420974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.995 [2024-07-25 09:06:09.421707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.995 [2024-07-25 09:06:09.421929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:1232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.995 [2024-07-25 09:06:09.422065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.995 [2024-07-25 09:06:09.422175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.995 [2024-07-25 09:06:09.422298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.995 [2024-07-25 09:06:09.422415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.995 [2024-07-25 09:06:09.422524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.995 [2024-07-25 09:06:09.422637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.995 [2024-07-25 09:06:09.422746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.995 [2024-07-25 09:06:09.422874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.995 [2024-07-25 09:06:09.422989] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.995 [2024-07-25 09:06:09.423100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.995 [2024-07-25 09:06:09.423215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.995 [2024-07-25 09:06:09.423319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.995 [2024-07-25 09:06:09.423432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.995 [2024-07-25 09:06:09.423536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.995 [2024-07-25 09:06:09.423649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.995 [2024-07-25 09:06:09.423776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.995 [2024-07-25 09:06:09.423936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.995 [2024-07-25 09:06:09.424074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.995 [2024-07-25 09:06:09.424186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.995 [2024-07-25 09:06:09.424290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.995 [2024-07-25 09:06:09.424399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.995 [2024-07-25 09:06:09.424504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.995 [2024-07-25 09:06:09.424613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.995 [2024-07-25 09:06:09.424717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.995 [2024-07-25 09:06:09.424886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.995 [2024-07-25 09:06:09.425006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.995 [2024-07-25 09:06:09.425117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.995 [2024-07-25 09:06:09.425222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.995 [2024-07-25 09:06:09.425331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.995 [2024-07-25 09:06:09.425446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.995 [2024-07-25 09:06:09.425570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.995 [2024-07-25 09:06:09.425675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.995 [2024-07-25 09:06:09.425780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.995 [2024-07-25 09:06:09.425922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.995 [2024-07-25 09:06:09.426050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.995 [2024-07-25 09:06:09.426157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.995 [2024-07-25 09:06:09.426272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.995 [2024-07-25 09:06:09.426366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.995 [2024-07-25 09:06:09.426482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.995 [2024-07-25 09:06:09.426599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.995 [2024-07-25 09:06:09.426705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.995 [2024-07-25 09:06:09.426827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.995 [2024-07-25 09:06:09.426957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.995 [2024-07-25 09:06:09.427080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.995 [2024-07-25 09:06:09.427191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.995 [2024-07-25 09:06:09.427301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:1352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.995 [2024-07-25 09:06:09.427413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.995 [2024-07-25 09:06:09.427517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.995 [2024-07-25 09:06:09.427642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:13.995 [2024-07-25 09:06:09.427758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.995 [2024-07-25 09:06:09.427898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.995 [2024-07-25 09:06:09.428029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:1376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.995 [2024-07-25 09:06:09.428144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.995 [2024-07-25 09:06:09.428260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.995 [2024-07-25 09:06:09.428374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.995 [2024-07-25 09:06:09.428467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.995 [2024-07-25 09:06:09.428644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.995 [2024-07-25 09:06:09.428741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:1400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.995 [2024-07-25 09:06:09.428878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.995 [2024-07-25 09:06:09.428989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.995 [2024-07-25 09:06:09.429106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.995 [2024-07-25 09:06:09.429212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.995 [2024-07-25 09:06:09.429335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.995 [2024-07-25 09:06:09.429438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.995 [2024-07-25 09:06:09.429552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.995 [2024-07-25 09:06:09.429686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.996 [2024-07-25 09:06:09.429795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.996 [2024-07-25 09:06:09.429944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.996 [2024-07-25 09:06:09.430074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.996 [2024-07-25 09:06:09.430180] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.996 [2024-07-25 09:06:09.430297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.996 [2024-07-25 09:06:09.430390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.996 [2024-07-25 09:06:09.430510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.996 [2024-07-25 09:06:09.430621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:1464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.996 [2024-07-25 09:06:09.430727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.996 [2024-07-25 09:06:09.430883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.996 [2024-07-25 09:06:09.430998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.996 [2024-07-25 09:06:09.431103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.996 [2024-07-25 09:06:09.431228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.996 [2024-07-25 09:06:09.431328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.996 [2024-07-25 09:06:09.431448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.996 [2024-07-25 09:06:09.431551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.996 [2024-07-25 09:06:09.431660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.996 [2024-07-25 09:06:09.431775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.996 [2024-07-25 09:06:09.431913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.996 [2024-07-25 09:06:09.432049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.996 [2024-07-25 09:06:09.432162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.996 [2024-07-25 09:06:09.432265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.996 [2024-07-25 09:06:09.432385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.996 [2024-07-25 09:06:09.432498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 
nsid:1 lba:1016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.996 [2024-07-25 09:06:09.432615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.996 [2024-07-25 09:06:09.432720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.996 [2024-07-25 09:06:09.432860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.996 [2024-07-25 09:06:09.432975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.996 [2024-07-25 09:06:09.433085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.996 [2024-07-25 09:06:09.433188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.996 [2024-07-25 09:06:09.433296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.996 [2024-07-25 09:06:09.433413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.996 [2024-07-25 09:06:09.433525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.996 [2024-07-25 09:06:09.433637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.996 [2024-07-25 09:06:09.433752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.996 [2024-07-25 09:06:09.433864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.996 [2024-07-25 09:06:09.433988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.996 [2024-07-25 09:06:09.434106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.996 [2024-07-25 09:06:09.434221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.996 [2024-07-25 09:06:09.434326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.996 [2024-07-25 09:06:09.434431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.996 [2024-07-25 09:06:09.434545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.996 [2024-07-25 09:06:09.434657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.996 [2024-07-25 09:06:09.434766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:13.996 [2024-07-25 09:06:09.434891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.996 [2024-07-25 09:06:09.435000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.996 [2024-07-25 09:06:09.435125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.996 [2024-07-25 09:06:09.435229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.996 [2024-07-25 09:06:09.435337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.996 [2024-07-25 09:06:09.435458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.996 [2024-07-25 09:06:09.435568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.996 [2024-07-25 09:06:09.435682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.996 [2024-07-25 09:06:09.435799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.996 [2024-07-25 09:06:09.435934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.996 [2024-07-25 09:06:09.436075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.996 [2024-07-25 09:06:09.436190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.996 [2024-07-25 09:06:09.436293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.996 [2024-07-25 09:06:09.436397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.997 [2024-07-25 09:06:09.436500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.997 [2024-07-25 09:06:09.436608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.997 [2024-07-25 09:06:09.436730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.997 [2024-07-25 09:06:09.436851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.997 [2024-07-25 09:06:09.436974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.997 [2024-07-25 09:06:09.437074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.997 [2024-07-25 09:06:09.437183] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.997 [2024-07-25 09:06:09.437275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.997 [2024-07-25 09:06:09.437390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.997 [2024-07-25 09:06:09.437498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.997 [2024-07-25 09:06:09.437611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.997 [2024-07-25 09:06:09.437714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.997 [2024-07-25 09:06:09.437847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.997 [2024-07-25 09:06:09.437956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.997 [2024-07-25 09:06:09.438071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.997 [2024-07-25 09:06:09.438174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.997 [2024-07-25 09:06:09.438322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.997 [2024-07-25 09:06:09.438419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.997 [2024-07-25 09:06:09.438548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.997 [2024-07-25 09:06:09.438658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.997 [2024-07-25 09:06:09.438771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.997 [2024-07-25 09:06:09.438916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.997 [2024-07-25 09:06:09.439049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.997 [2024-07-25 09:06:09.439155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.997 [2024-07-25 09:06:09.439268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.997 [2024-07-25 09:06:09.439373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.997 [2024-07-25 09:06:09.439482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.997 [2024-07-25 09:06:09.439599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.997 [2024-07-25 09:06:09.439723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.997 [2024-07-25 09:06:09.439850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.997 [2024-07-25 09:06:09.439994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.997 [2024-07-25 09:06:09.440106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.997 [2024-07-25 09:06:09.440227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.997 [2024-07-25 09:06:09.440320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.997 [2024-07-25 09:06:09.440425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.997 [2024-07-25 09:06:09.440529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.997 [2024-07-25 09:06:09.440639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.997 [2024-07-25 09:06:09.440731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.997 [2024-07-25 09:06:09.440859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.997 [2024-07-25 09:06:09.440973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:1696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.997 [2024-07-25 09:06:09.441086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.997 [2024-07-25 09:06:09.441179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.997 [2024-07-25 09:06:09.441291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.997 [2024-07-25 09:06:09.441394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.997 [2024-07-25 09:06:09.441502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.997 [2024-07-25 09:06:09.441608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.997 [2024-07-25 09:06:09.441712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:13.997 [2024-07-25 09:06:09.441843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.997 [2024-07-25 09:06:09.441962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.997 [2024-07-25 09:06:09.442068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.997 [2024-07-25 09:06:09.442173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.997 [2024-07-25 09:06:09.442287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.997 [2024-07-25 09:06:09.442421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.997 [2024-07-25 09:06:09.442526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.997 [2024-07-25 09:06:09.442640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.997 [2024-07-25 09:06:09.442745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.997 [2024-07-25 09:06:09.442903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.997 [2024-07-25 09:06:09.443017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.997 [2024-07-25 09:06:09.443127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.997 [2024-07-25 09:06:09.443231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.997 [2024-07-25 09:06:09.443345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.997 [2024-07-25 09:06:09.443458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.997 [2024-07-25 09:06:09.443563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.997 [2024-07-25 09:06:09.443667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.997 [2024-07-25 09:06:09.443769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.997 [2024-07-25 09:06:09.443895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.997 [2024-07-25 09:06:09.444028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.997 [2024-07-25 09:06:09.444125] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.997 [2024-07-25 09:06:09.444232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.997 [2024-07-25 09:06:09.444360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:1752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.997 [2024-07-25 09:06:09.444470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.997 [2024-07-25 09:06:09.444579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.997 [2024-07-25 09:06:09.444668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.998 [2024-07-25 09:06:09.444772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.998 [2024-07-25 09:06:09.444893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.998 [2024-07-25 09:06:09.444991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.998 [2024-07-25 09:06:09.445113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.998 [2024-07-25 09:06:09.445222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.998 [2024-07-25 09:06:09.445331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.998 [2024-07-25 09:06:09.445434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.998 [2024-07-25 09:06:09.445539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.998 [2024-07-25 09:06:09.445642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.998 [2024-07-25 09:06:09.445745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.998 [2024-07-25 09:06:09.445875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.998 [2024-07-25 09:06:09.445979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.998 [2024-07-25 09:06:09.446084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.998 [2024-07-25 09:06:09.446185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.998 [2024-07-25 09:06:09.446277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:16 nsid:1 lba:1824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.998 [2024-07-25 09:06:09.446381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.998 [2024-07-25 09:06:09.446484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.998 [2024-07-25 09:06:09.446591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.998 [2024-07-25 09:06:09.446710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:1840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.998 [2024-07-25 09:06:09.446850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.998 [2024-07-25 09:06:09.446964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.998 [2024-07-25 09:06:09.447071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.998 [2024-07-25 09:06:09.447175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.998 [2024-07-25 09:06:09.447280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.998 [2024-07-25 09:06:09.447386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.998 [2024-07-25 09:06:09.447495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.998 [2024-07-25 09:06:09.447599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.998 [2024-07-25 09:06:09.447701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.998 [2024-07-25 09:06:09.447826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.998 [2024-07-25 09:06:09.447975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.998 [2024-07-25 09:06:09.448106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.998 [2024-07-25 09:06:09.448218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.998 [2024-07-25 09:06:09.448323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.998 [2024-07-25 09:06:09.448514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.998 [2024-07-25 09:06:09.448623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1200 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:13.998 [2024-07-25 09:06:09.448731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.998 [2024-07-25 09:06:09.448862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.998 [2024-07-25 09:06:09.448981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.998 [2024-07-25 09:06:09.449084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ba00 is same with the state(5) to be set 00:24:13.998 [2024-07-25 09:06:09.449216] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:13.998 [2024-07-25 09:06:09.449319] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:13.998 [2024-07-25 09:06:09.449420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1216 len:8 PRP1 0x0 PRP2 0x0 00:24:13.998 [2024-07-25 09:06:09.449520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.998 [2024-07-25 09:06:09.449628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:13.998 [2024-07-25 09:06:09.449727] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:13.998 [2024-07-25 09:06:09.449836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1864 len:8 PRP1 0x0 PRP2 0x0 00:24:13.998 [2024-07-25 09:06:09.449965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.998 [2024-07-25 09:06:09.450067] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:13.998 [2024-07-25 09:06:09.450164] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:13.998 [2024-07-25 09:06:09.450256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1872 len:8 PRP1 0x0 PRP2 0x0 00:24:13.998 [2024-07-25 09:06:09.450356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.998 [2024-07-25 09:06:09.450442] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:13.998 [2024-07-25 09:06:09.450535] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:13.998 [2024-07-25 09:06:09.450615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1880 len:8 PRP1 0x0 PRP2 0x0 00:24:13.998 [2024-07-25 09:06:09.450651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.998 [2024-07-25 09:06:09.450680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:13.998 [2024-07-25 09:06:09.450700] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:13.998 [2024-07-25 09:06:09.450724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:8 PRP1 0x0 PRP2 0x0 00:24:13.998 [2024-07-25 09:06:09.450747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.998 [2024-07-25 09:06:09.450790] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:13.998 [2024-07-25 09:06:09.450826] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:13.998 [2024-07-25 09:06:09.450851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1896 len:8 PRP1 0x0 PRP2 0x0 00:24:13.998 [2024-07-25 09:06:09.450875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.998 [2024-07-25 09:06:09.450900] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:13.998 [2024-07-25 09:06:09.450919] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:13.998 [2024-07-25 09:06:09.450939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1904 len:8 PRP1 0x0 PRP2 0x0 00:24:13.998 [2024-07-25 09:06:09.450962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.998 [2024-07-25 09:06:09.450986] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:13.998 [2024-07-25 09:06:09.451005] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:13.998 [2024-07-25 09:06:09.451024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1912 len:8 PRP1 0x0 PRP2 0x0 00:24:13.998 [2024-07-25 09:06:09.451047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.998 [2024-07-25 09:06:09.451071] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:13.998 [2024-07-25 09:06:09.451090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:13.998 [2024-07-25 09:06:09.451109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:8 PRP1 0x0 PRP2 0x0 00:24:13.998 [2024-07-25 09:06:09.451132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.998 [2024-07-25 09:06:09.451450] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002ba00 was disconnected and freed. reset controller. 
00:24:13.998 [2024-07-25 09:06:09.451486] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:24:13.998 [2024-07-25 09:06:09.451609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:13.998 [2024-07-25 09:06:09.451644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:13.998 [2024-07-25 09:06:09.451672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:13.999 [2024-07-25 09:06:09.451697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:13.999 [2024-07-25 09:06:09.451722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:13.999 [2024-07-25 09:06:09.451753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:13.999 [2024-07-25 09:06:09.451777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:13.999 [2024-07-25 09:06:09.451801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:13.999 [2024-07-25 09:06:09.451846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:13.999 [2024-07-25 09:06:09.451981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor
00:24:13.999 [2024-07-25 09:06:09.457702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:13.999 [2024-07-25 09:06:09.508133] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
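The block above closes the first failover pass: the qpair on 10.0.0.2:4421 is torn down, the queued READ/WRITE commands are reported as ABORTED - SQ DELETION, and the reset of nqn.2016-06.io.spdk:cnode1 completes. When triaging a run like this it is easier to condense the dump into a few counters than to read it entry by entry. The following is a minimal post-processing sketch in Python, not part of the SPDK test harness; the log path is an assumption, and it matches only message fragments that appear verbatim in the output above.

#!/usr/bin/env python3
"""Condense an SPDK bdevperf failover console log into a few counters.

Hypothetical helper (not part of the SPDK repo): it greps only for message
fragments that appear verbatim in the log above.
"""
import re
import sys
from collections import Counter

# Fragments copied from the messages printed by nvme_qpair.c / bdev_nvme.c.
ABORTED_IO = re.compile(r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) sqid:(\d+)")
FAILOVER = re.compile(r"bdev_nvme_failover_trid: \*NOTICE\*: Start failover from (\S+) to (\S+)")
RESET_OK = "Resetting controller successful"

def summarize(path: str) -> None:
    text = open(path, errors="replace").read()
    aborted = Counter(ABORTED_IO.findall(text))   # (opcode, sqid) -> count
    failovers = FAILOVER.findall(text)            # [(from_trid, to_trid), ...]
    resets_ok = text.count(RESET_OK)

    for (opcode, sqid), count in sorted(aborted.items()):
        print(f"aborted {opcode:<5} on sqid {sqid}: {count}")
    for src, dst in failovers:
        print(f"failover: {src} -> {dst}")
    print(f"successful controller resets: {resets_ok}")

if __name__ == "__main__":
    # Usage (hypothetical log path): python3 summarize_failover_log.py bdevperf.log
    summarize(sys.argv[1])

Run against a saved copy of this console output, it would attribute every aborted I/O above to sqid 1 and pick up the failover from 10.0.0.2:4421 to 10.0.0.2:4422 together with the successful reset that follows it.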
00:24:13.999 [2024-07-25 09:06:14.161519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:125368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.999 [2024-07-25 09:06:14.162242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.999 [2024-07-25 09:06:14.162410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:125376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.999 [2024-07-25 09:06:14.162560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.999 [2024-07-25 09:06:14.162699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:125384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.999 [2024-07-25 09:06:14.162861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.999 [2024-07-25 09:06:14.162990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:125392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.999 [2024-07-25 09:06:14.163142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.999 [2024-07-25 09:06:14.163264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:125400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.999 [2024-07-25 09:06:14.163407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.999 [2024-07-25 09:06:14.163528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:125408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.999 [2024-07-25 09:06:14.163658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.999 [2024-07-25 09:06:14.163778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:125416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.999 [2024-07-25 09:06:14.163956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.999 [2024-07-25 09:06:14.164112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:125424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.999 [2024-07-25 09:06:14.164268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.999 [2024-07-25 09:06:14.164412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:125432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.999 [2024-07-25 09:06:14.164563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.999 [2024-07-25 09:06:14.164682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:125440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.999 [2024-07-25 09:06:14.164831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.999 [2024-07-25 09:06:14.164978] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:125448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.999 [2024-07-25 09:06:14.165125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.999 [2024-07-25 09:06:14.165258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:125456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.999 [2024-07-25 09:06:14.165393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.999 [2024-07-25 09:06:14.165562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:125464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.999 [2024-07-25 09:06:14.165691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.999 [2024-07-25 09:06:14.165840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:125472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.999 [2024-07-25 09:06:14.165965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.999 [2024-07-25 09:06:14.166102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:125480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.999 [2024-07-25 09:06:14.166239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.999 [2024-07-25 09:06:14.166359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:125488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.999 [2024-07-25 09:06:14.166494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.999 [2024-07-25 09:06:14.166626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:125496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.999 [2024-07-25 09:06:14.166754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.999 [2024-07-25 09:06:14.166895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:125504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.999 [2024-07-25 09:06:14.167037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.999 [2024-07-25 09:06:14.167182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:125512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.999 [2024-07-25 09:06:14.167318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.999 [2024-07-25 09:06:14.167438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:125520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.999 [2024-07-25 09:06:14.167552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.999 [2024-07-25 09:06:14.167669] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:124984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.999 [2024-07-25 09:06:14.167796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.999 [2024-07-25 09:06:14.167975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:124992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.999 [2024-07-25 09:06:14.170172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.999 [2024-07-25 09:06:14.170329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:125000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.999 [2024-07-25 09:06:14.170449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.999 [2024-07-25 09:06:14.170582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:125008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.999 [2024-07-25 09:06:14.170711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.999 [2024-07-25 09:06:14.170850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:125016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.999 [2024-07-25 09:06:14.171028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.999 [2024-07-25 09:06:14.171167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:125024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.999 [2024-07-25 09:06:14.171310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.999 [2024-07-25 09:06:14.171365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:125032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.999 [2024-07-25 09:06:14.171402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.999 [2024-07-25 09:06:14.171440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:125040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.999 [2024-07-25 09:06:14.171474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.999 [2024-07-25 09:06:14.171512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:125048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.999 [2024-07-25 09:06:14.171546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.999 [2024-07-25 09:06:14.171584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:125056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.999 [2024-07-25 09:06:14.171618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.999 [2024-07-25 09:06:14.171655] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:41 nsid:1 lba:125064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.999 [2024-07-25 09:06:14.171689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.999 [2024-07-25 09:06:14.171726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:125072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.999 [2024-07-25 09:06:14.171760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.999 [2024-07-25 09:06:14.171797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:125080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.999 [2024-07-25 09:06:14.171861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.000 [2024-07-25 09:06:14.171903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:125088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.000 [2024-07-25 09:06:14.171938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.000 [2024-07-25 09:06:14.171994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:125096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.000 [2024-07-25 09:06:14.172028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.000 [2024-07-25 09:06:14.172066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:125104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.000 [2024-07-25 09:06:14.172100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.000 [2024-07-25 09:06:14.172138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:125528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.000 [2024-07-25 09:06:14.172172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.000 [2024-07-25 09:06:14.172231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:125536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.000 [2024-07-25 09:06:14.172311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.000 [2024-07-25 09:06:14.172353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:125544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.000 [2024-07-25 09:06:14.172388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.000 [2024-07-25 09:06:14.172426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:125552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.000 [2024-07-25 09:06:14.172460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.000 [2024-07-25 09:06:14.172498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 
lba:125560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.000 [2024-07-25 09:06:14.172532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.000 [2024-07-25 09:06:14.172570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:125568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.000 [2024-07-25 09:06:14.172603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.000 [2024-07-25 09:06:14.172641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:125576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.000 [2024-07-25 09:06:14.172675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.000 [2024-07-25 09:06:14.172712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:125584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.000 [2024-07-25 09:06:14.172747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.000 [2024-07-25 09:06:14.172784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:125592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.000 [2024-07-25 09:06:14.172833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.000 [2024-07-25 09:06:14.172875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:125600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.000 [2024-07-25 09:06:14.172910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.000 [2024-07-25 09:06:14.172948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:125608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.000 [2024-07-25 09:06:14.172981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.000 [2024-07-25 09:06:14.173018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:125616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.000 [2024-07-25 09:06:14.173052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.000 [2024-07-25 09:06:14.173089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:125112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.000 [2024-07-25 09:06:14.173123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.000 [2024-07-25 09:06:14.173161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:125120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.000 [2024-07-25 09:06:14.173194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.000 [2024-07-25 09:06:14.173245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:125128 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:14.000 [2024-07-25 09:06:14.173280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.000 [2024-07-25 09:06:14.173318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:125136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.000 [2024-07-25 09:06:14.173351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.000 [2024-07-25 09:06:14.173388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:125144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.000 [2024-07-25 09:06:14.173421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.000 [2024-07-25 09:06:14.173459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:125152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.000 [2024-07-25 09:06:14.173493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.000 [2024-07-25 09:06:14.173530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:125160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.000 [2024-07-25 09:06:14.173563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.000 [2024-07-25 09:06:14.173600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:125168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.000 [2024-07-25 09:06:14.173633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.000 [2024-07-25 09:06:14.173670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:125176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.000 [2024-07-25 09:06:14.173704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.000 [2024-07-25 09:06:14.173741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:125184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.000 [2024-07-25 09:06:14.173775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.000 [2024-07-25 09:06:14.173828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:125192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.000 [2024-07-25 09:06:14.173865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.000 [2024-07-25 09:06:14.173903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:125200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.000 [2024-07-25 09:06:14.173937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.000 [2024-07-25 09:06:14.173974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:125208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.000 
[2024-07-25 09:06:14.174007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.000 [2024-07-25 09:06:14.174045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:125216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.000 [2024-07-25 09:06:14.174078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.000 [2024-07-25 09:06:14.174116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:125224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.000 [2024-07-25 09:06:14.174161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.000 [2024-07-25 09:06:14.174199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:125232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.000 [2024-07-25 09:06:14.174233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.000 [2024-07-25 09:06:14.174270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:125624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.000 [2024-07-25 09:06:14.174304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.000 [2024-07-25 09:06:14.174341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:125632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.000 [2024-07-25 09:06:14.174374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.001 [2024-07-25 09:06:14.174411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:125640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.001 [2024-07-25 09:06:14.174445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.001 [2024-07-25 09:06:14.174483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:125648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.001 [2024-07-25 09:06:14.174527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.001 [2024-07-25 09:06:14.174581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:125656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.001 [2024-07-25 09:06:14.174619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.001 [2024-07-25 09:06:14.174657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:125664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.001 [2024-07-25 09:06:14.174690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.001 [2024-07-25 09:06:14.174728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:125672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.001 [2024-07-25 09:06:14.174761] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.001 [2024-07-25 09:06:14.174798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:125680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.001 [2024-07-25 09:06:14.174857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.001 [2024-07-25 09:06:14.174898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:125688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.001 [2024-07-25 09:06:14.174932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.001 [2024-07-25 09:06:14.174969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:125696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.001 [2024-07-25 09:06:14.175003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.001 [2024-07-25 09:06:14.175040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:125704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.001 [2024-07-25 09:06:14.175074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.001 [2024-07-25 09:06:14.175125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:125712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.001 [2024-07-25 09:06:14.175160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.001 [2024-07-25 09:06:14.175197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:125720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.001 [2024-07-25 09:06:14.175231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.001 [2024-07-25 09:06:14.175268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:125728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.001 [2024-07-25 09:06:14.175302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.001 [2024-07-25 09:06:14.175351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:125736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.001 [2024-07-25 09:06:14.175384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.001 [2024-07-25 09:06:14.175422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:125744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.001 [2024-07-25 09:06:14.175456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.001 [2024-07-25 09:06:14.175493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:125240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.001 [2024-07-25 09:06:14.175527] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.001 [2024-07-25 09:06:14.175564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:125248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.001 [2024-07-25 09:06:14.175618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.001 [2024-07-25 09:06:14.175657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:125256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.001 [2024-07-25 09:06:14.175692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.001 [2024-07-25 09:06:14.175730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:125264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.001 [2024-07-25 09:06:14.175764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.001 [2024-07-25 09:06:14.175801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:125272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.001 [2024-07-25 09:06:14.175856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.001 [2024-07-25 09:06:14.175897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:125280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.001 [2024-07-25 09:06:14.175931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.001 [2024-07-25 09:06:14.175984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:125288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.001 [2024-07-25 09:06:14.176019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.001 [2024-07-25 09:06:14.176056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:125296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.001 [2024-07-25 09:06:14.176103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.001 [2024-07-25 09:06:14.176143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:125752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.001 [2024-07-25 09:06:14.176177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.001 [2024-07-25 09:06:14.176215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:125760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.001 [2024-07-25 09:06:14.176248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.001 [2024-07-25 09:06:14.176286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:125768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.001 [2024-07-25 09:06:14.176320] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.001 [2024-07-25 09:06:14.176357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:125776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.001 [2024-07-25 09:06:14.176390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.001 [2024-07-25 09:06:14.176427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:125784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.001 [2024-07-25 09:06:14.176460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.001 [2024-07-25 09:06:14.176499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:125792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.001 [2024-07-25 09:06:14.176532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.001 [2024-07-25 09:06:14.176569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.001 [2024-07-25 09:06:14.176602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.001 [2024-07-25 09:06:14.176639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:125808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.001 [2024-07-25 09:06:14.176673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.001 [2024-07-25 09:06:14.176720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:125816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.001 [2024-07-25 09:06:14.176755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.001 [2024-07-25 09:06:14.176791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:125824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.001 [2024-07-25 09:06:14.176842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.001 [2024-07-25 09:06:14.176882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:125832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.001 [2024-07-25 09:06:14.176918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.001 [2024-07-25 09:06:14.176955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:125840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.001 [2024-07-25 09:06:14.176988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.001 [2024-07-25 09:06:14.177039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:125848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.001 [2024-07-25 09:06:14.177075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.001 [2024-07-25 09:06:14.177112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:125856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.002 [2024-07-25 09:06:14.177146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.002 [2024-07-25 09:06:14.177183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:125864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.002 [2024-07-25 09:06:14.177217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.002 [2024-07-25 09:06:14.177254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:125872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.002 [2024-07-25 09:06:14.177288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.002 [2024-07-25 09:06:14.177326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:125304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.002 [2024-07-25 09:06:14.177359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.002 [2024-07-25 09:06:14.177397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:125312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.002 [2024-07-25 09:06:14.177431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.002 [2024-07-25 09:06:14.177468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:125320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.002 [2024-07-25 09:06:14.177502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.002 [2024-07-25 09:06:14.177539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:125328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.002 [2024-07-25 09:06:14.177573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.002 [2024-07-25 09:06:14.177610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:125336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.002 [2024-07-25 09:06:14.177643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.002 [2024-07-25 09:06:14.177681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:125344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.002 [2024-07-25 09:06:14.177714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.002 [2024-07-25 09:06:14.177751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:125352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.002 [2024-07-25 09:06:14.177784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:24:14.002 [2024-07-25 09:06:14.177836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002c180 is same with the state(5) to be set 00:24:14.002 [2024-07-25 09:06:14.177884] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:14.002 [2024-07-25 09:06:14.177917] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:14.002 [2024-07-25 09:06:14.177947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125360 len:8 PRP1 0x0 PRP2 0x0 00:24:14.002 [2024-07-25 09:06:14.177992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.002 [2024-07-25 09:06:14.178029] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:14.002 [2024-07-25 09:06:14.178055] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:14.002 [2024-07-25 09:06:14.178081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125880 len:8 PRP1 0x0 PRP2 0x0 00:24:14.002 [2024-07-25 09:06:14.178113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.002 [2024-07-25 09:06:14.178145] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:14.002 [2024-07-25 09:06:14.178170] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:14.002 [2024-07-25 09:06:14.178196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125888 len:8 PRP1 0x0 PRP2 0x0 00:24:14.002 [2024-07-25 09:06:14.178228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.002 [2024-07-25 09:06:14.178259] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:14.002 [2024-07-25 09:06:14.178284] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:14.002 [2024-07-25 09:06:14.178310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125896 len:8 PRP1 0x0 PRP2 0x0 00:24:14.002 [2024-07-25 09:06:14.178341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.002 [2024-07-25 09:06:14.178373] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:14.002 [2024-07-25 09:06:14.178397] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:14.002 [2024-07-25 09:06:14.178424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125904 len:8 PRP1 0x0 PRP2 0x0 00:24:14.002 [2024-07-25 09:06:14.178456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.002 [2024-07-25 09:06:14.178488] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:14.002 [2024-07-25 09:06:14.178513] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:14.002 [2024-07-25 09:06:14.178541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125912 len:8 PRP1 0x0 PRP2 0x0 00:24:14.002 [2024-07-25 
09:06:14.178573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.002 [2024-07-25 09:06:14.178605] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:14.002 [2024-07-25 09:06:14.178630] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:14.002 [2024-07-25 09:06:14.178656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125920 len:8 PRP1 0x0 PRP2 0x0 00:24:14.002 [2024-07-25 09:06:14.178687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.002 [2024-07-25 09:06:14.178718] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:14.002 [2024-07-25 09:06:14.178743] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:14.002 [2024-07-25 09:06:14.178770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125928 len:8 PRP1 0x0 PRP2 0x0 00:24:14.002 [2024-07-25 09:06:14.178801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.002 [2024-07-25 09:06:14.178851] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:14.002 [2024-07-25 09:06:14.178884] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:14.002 [2024-07-25 09:06:14.178923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125936 len:8 PRP1 0x0 PRP2 0x0 00:24:14.002 [2024-07-25 09:06:14.178957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.002 [2024-07-25 09:06:14.178990] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:14.002 [2024-07-25 09:06:14.179015] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:14.002 [2024-07-25 09:06:14.179042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125944 len:8 PRP1 0x0 PRP2 0x0 00:24:14.002 [2024-07-25 09:06:14.179073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.002 [2024-07-25 09:06:14.179105] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:14.002 [2024-07-25 09:06:14.179130] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:14.002 [2024-07-25 09:06:14.179156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125952 len:8 PRP1 0x0 PRP2 0x0 00:24:14.002 [2024-07-25 09:06:14.179188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.002 [2024-07-25 09:06:14.179220] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:14.002 [2024-07-25 09:06:14.179245] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:14.002 [2024-07-25 09:06:14.179271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125960 len:8 PRP1 0x0 PRP2 0x0 00:24:14.002 [2024-07-25 09:06:14.179303] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.002 [2024-07-25 09:06:14.179334] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:14.002 [2024-07-25 09:06:14.179359] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:14.002 [2024-07-25 09:06:14.179385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125968 len:8 PRP1 0x0 PRP2 0x0 00:24:14.002 [2024-07-25 09:06:14.179417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.002 [2024-07-25 09:06:14.179449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:14.002 [2024-07-25 09:06:14.179473] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:14.002 [2024-07-25 09:06:14.179499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125976 len:8 PRP1 0x0 PRP2 0x0 00:24:14.002 [2024-07-25 09:06:14.179531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.002 [2024-07-25 09:06:14.179562] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:14.002 [2024-07-25 09:06:14.179588] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:14.002 [2024-07-25 09:06:14.179614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125984 len:8 PRP1 0x0 PRP2 0x0 00:24:14.002 [2024-07-25 09:06:14.179646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.002 [2024-07-25 09:06:14.179689] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:14.002 [2024-07-25 09:06:14.179727] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:14.002 [2024-07-25 09:06:14.179771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125992 len:8 PRP1 0x0 PRP2 0x0 00:24:14.002 [2024-07-25 09:06:14.179834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.002 [2024-07-25 09:06:14.179886] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:14.003 [2024-07-25 09:06:14.179920] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:14.003 [2024-07-25 09:06:14.179961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:126000 len:8 PRP1 0x0 PRP2 0x0 00:24:14.003 [2024-07-25 09:06:14.179998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.003 [2024-07-25 09:06:14.180401] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002c180 was disconnected and freed. reset controller. 
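The long run of ABORTED - SQ DELETION completions above is the expected side effect of the failover path: when the old TCP qpair is torn down, every request still queued on it is completed manually with an abort status before the controller is reset against the next listener. When reading a saved copy of this output, a quick way to summarize the flood is a plain grep (a convenience one-liner only; the file name failover.log is a stand-in, not a path used by the test):

    # count how many queued I/Os were aborted during the qpair teardown, and how many were writes
    grep -c 'ABORTED - SQ DELETION' failover.log
    grep -c 'WRITE sqid:1' failover.log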
00:24:14.003 [2024-07-25 09:06:14.180445] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:24:14.003 [2024-07-25 09:06:14.180587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.003 [2024-07-25 09:06:14.180631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.003 [2024-07-25 09:06:14.180667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.003 [2024-07-25 09:06:14.180701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.003 [2024-07-25 09:06:14.180734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.003 [2024-07-25 09:06:14.180767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.003 [2024-07-25 09:06:14.180801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:14.003 [2024-07-25 09:06:14.180855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.003 [2024-07-25 09:06:14.180890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:14.003 [2024-07-25 09:06:14.181017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:24:14.003 [2024-07-25 09:06:14.186125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:14.003 [2024-07-25 09:06:14.238288] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:14.003 00:24:14.003 Latency(us) 00:24:14.003 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:14.003 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:14.003 Verification LBA range: start 0x0 length 0x4000 00:24:14.003 NVMe0n1 : 15.01 6610.14 25.82 266.81 0.00 18574.77 796.86 40036.54 00:24:14.003 =================================================================================================================== 00:24:14.003 Total : 6610.14 25.82 266.81 0.00 18574.77 796.86 40036.54 00:24:14.003 Received shutdown signal, test time was about 15.000000 seconds 00:24:14.003 00:24:14.003 Latency(us) 00:24:14.003 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:14.003 =================================================================================================================== 00:24:14.003 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:14.003 09:06:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:24:14.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
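The trace that follows shows the suite's pass criterion: host/failover.sh greps the captured bdevperf output for 'Resetting controller successful' and requires exactly three hits, matching the failovers across the 4420/4421/4422 listeners seen earlier in the log. Reconstructed as a minimal sketch (the variable output_log stands in for the capture file, presumably the try.txt seen later in the trace):

    # same check as host/failover.sh lines 65-67 in the trace
    count=$(grep -c 'Resetting controller successful' "$output_log")
    (( count != 3 )) && exit 1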
00:24:14.003 09:06:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:24:14.003 09:06:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:24:14.003 09:06:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=81607 00:24:14.003 09:06:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 81607 /var/tmp/bdevperf.sock 00:24:14.003 09:06:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:24:14.003 09:06:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 81607 ']' 00:24:14.003 09:06:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:14.003 09:06:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:14.003 09:06:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:14.003 09:06:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:14.003 09:06:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:14.945 09:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:14.946 09:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:14.946 09:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:14.946 [2024-07-25 09:06:22.008581] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:14.946 09:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:15.206 [2024-07-25 09:06:22.296952] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:15.206 09:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:15.772 NVMe0n1 00:24:15.772 09:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:16.030 00:24:16.030 09:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:16.288 00:24:16.288 09:06:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:16.288 09:06:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:24:16.545 09:06:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:17.112 09:06:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:24:20.398 09:06:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:20.398 09:06:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:24:20.398 09:06:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=81685 00:24:20.398 09:06:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 81685 00:24:20.398 09:06:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:21.334 0 00:24:21.334 09:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:21.334 [2024-07-25 09:06:20.905183] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:24:21.334 [2024-07-25 09:06:20.905377] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81607 ] 00:24:21.334 [2024-07-25 09:06:21.081631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.334 [2024-07-25 09:06:21.354501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.334 [2024-07-25 09:06:21.557462] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:21.334 [2024-07-25 09:06:23.900950] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:21.334 [2024-07-25 09:06:23.901123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.334 [2024-07-25 09:06:23.901162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.334 [2024-07-25 09:06:23.901190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.334 [2024-07-25 09:06:23.901214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.334 [2024-07-25 09:06:23.901235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.334 [2024-07-25 09:06:23.901268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.334 [2024-07-25 09:06:23.901293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.334 [2024-07-25 09:06:23.901329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.334 [2024-07-25 09:06:23.901356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:21.334 [2024-07-25 09:06:23.901442] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:21.334 [2024-07-25 09:06:23.901502] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:24:21.334 [2024-07-25 09:06:23.912859] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:21.334 Running I/O for 1 seconds... 00:24:21.334 00:24:21.334 Latency(us) 00:24:21.334 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:21.334 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:21.334 Verification LBA range: start 0x0 length 0x4000 00:24:21.334 NVMe0n1 : 1.01 5532.77 21.61 0.00 0.00 22978.17 1697.98 31218.97 00:24:21.334 =================================================================================================================== 00:24:21.334 Total : 5532.77 21.61 0.00 0.00 22978.17 1697.98 31218.97 00:24:21.334 09:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:21.334 09:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:24:21.592 09:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:21.849 09:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:21.849 09:06:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:24:22.107 09:06:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:22.364 09:06:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:24:25.669 09:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:25.669 09:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:24:25.669 09:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 81607 00:24:25.669 09:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 81607 ']' 00:24:25.669 09:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 81607 00:24:25.669 09:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:24:25.669 09:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:25.669 09:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81607 00:24:25.669 killing process with pid 81607 00:24:25.669 09:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:25.669 09:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:25.669 09:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81607' 00:24:25.669 09:06:32 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 81607 00:24:25.669 09:06:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 81607 00:24:27.041 09:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:27.042 09:06:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:27.042 09:06:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:27.042 09:06:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:27.042 09:06:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:27.042 09:06:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:27.042 09:06:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:24:27.042 09:06:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:27.042 09:06:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:24:27.042 09:06:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:27.042 09:06:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:27.042 rmmod nvme_tcp 00:24:27.042 rmmod nvme_fabrics 00:24:27.042 rmmod nvme_keyring 00:24:27.042 09:06:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:27.042 09:06:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:24:27.042 09:06:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:24:27.042 09:06:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 81342 ']' 00:24:27.042 09:06:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 81342 00:24:27.042 09:06:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 81342 ']' 00:24:27.042 09:06:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 81342 00:24:27.042 09:06:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:24:27.042 09:06:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:27.042 09:06:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81342 00:24:27.042 killing process with pid 81342 00:24:27.042 09:06:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:27.042 09:06:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:27.042 09:06:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81342' 00:24:27.042 09:06:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 81342 00:24:27.042 09:06:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 81342 00:24:28.418 09:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:28.418 09:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:28.418 09:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:28.418 09:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:24:28.418 09:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:28.418 09:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.418 09:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:28.418 09:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:28.677 00:24:28.677 real 0m36.362s 00:24:28.677 user 2m18.946s 00:24:28.677 sys 0m5.822s 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:28.677 ************************************ 00:24:28.677 END TEST nvmf_failover 00:24:28.677 ************************************ 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.677 ************************************ 00:24:28.677 START TEST nvmf_host_discovery 00:24:28.677 ************************************ 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:28.677 * Looking for test storage... 
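The END TEST / START TEST banners and the real/user/sys timing above come from the run_test wrapper in autotest_common.sh, invoked here as run_test nvmf_host_discovery .../discovery.sh --transport=tcp. A rough sketch of the shape of that wrapper (an approximation for orientation, not the actual helper, which also manages xtrace and timing bookkeeping):

    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"                  # e.g. discovery.sh --transport=tcp
        echo "END TEST $name"
    }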
00:24:28.677 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:28.677 09:06:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:28.677 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:28.678 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:28.678 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:28.678 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:28.678 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:28.678 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 
-- # ip link set nvmf_tgt_br nomaster 00:24:28.678 Cannot find device "nvmf_tgt_br" 00:24:28.678 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:24:28.678 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:28.678 Cannot find device "nvmf_tgt_br2" 00:24:28.678 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:24:28.678 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:28.678 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:28.678 Cannot find device "nvmf_tgt_br" 00:24:28.678 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:24:28.678 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:28.678 Cannot find device "nvmf_tgt_br2" 00:24:28.678 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:24:28.678 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:28.935 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:28.935 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:28.935 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:28.935 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:24:28.935 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:28.935 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:28.935 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:24:28.935 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:28.935 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:28.935 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:28.935 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:28.935 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:28.935 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:28.935 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:28.935 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:28.935 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:28.935 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:28.935 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:28.935 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@185 
-- # ip link set nvmf_tgt_br up 00:24:28.935 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:28.935 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:28.935 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:28.935 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:28.935 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:28.935 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:28.935 09:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:28.935 09:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:28.935 09:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:28.936 09:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:28.936 09:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:29.194 09:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:29.194 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:29.194 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:24:29.194 00:24:29.194 --- 10.0.0.2 ping statistics --- 00:24:29.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:29.194 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:24:29.194 09:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:29.194 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:29.194 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:24:29.194 00:24:29.194 --- 10.0.0.3 ping statistics --- 00:24:29.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:29.194 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:24:29.194 09:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:29.194 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:29.194 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:24:29.194 00:24:29.194 --- 10.0.0.1 ping statistics --- 00:24:29.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:29.194 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:24:29.194 09:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:29.194 09:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:24:29.194 09:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:29.194 09:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:29.194 09:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:29.194 09:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:29.194 09:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:29.194 09:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:29.194 09:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:29.194 09:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:29.194 09:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:29.194 09:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:29.194 09:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:29.194 09:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=81972 00:24:29.194 09:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 81972 00:24:29.194 09:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:29.194 09:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 81972 ']' 00:24:29.194 09:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:29.194 09:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:29.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:29.194 09:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:29.194 09:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:29.194 09:06:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:29.194 [2024-07-25 09:06:36.220763] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
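For reference, the nvmf_veth_init sequence traced above builds the topology the target started below will listen on: one veth pair for the initiator side and two pairs whose far ends live in the nvmf_tgt_ns_spdk namespace, all bridged together, with 10.0.0.1 on the host and 10.0.0.2/10.0.0.3 inside the namespace. Condensed into a standalone sketch (same interface names and addresses as the trace; the per-link "ip link set ... up" steps are abbreviated):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br      # likewise nvmf_tgt_br and nvmf_tgt_br2
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3     # same reachability checks as in the trace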
00:24:29.194 [2024-07-25 09:06:36.221208] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:29.460 [2024-07-25 09:06:36.402362] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.745 [2024-07-25 09:06:36.643016] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:29.745 [2024-07-25 09:06:36.643514] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:29.745 [2024-07-25 09:06:36.643553] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:29.745 [2024-07-25 09:06:36.643571] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:29.745 [2024-07-25 09:06:36.643584] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:29.745 [2024-07-25 09:06:36.643634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:29.745 [2024-07-25 09:06:36.847869] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:30.312 09:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:30.312 09:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:24:30.313 09:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:30.313 09:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:30.313 09:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:30.313 09:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:30.313 09:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:30.313 09:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.313 09:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:30.313 [2024-07-25 09:06:37.196924] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:30.313 09:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.313 09:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:24:30.313 09:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.313 09:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:30.313 [2024-07-25 09:06:37.205085] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:30.313 09:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.313 09:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:30.313 09:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.313 09:06:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:30.313 null0 00:24:30.313 09:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.313 09:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:30.313 09:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.313 09:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:30.313 null1 00:24:30.313 09:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.313 09:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:30.313 09:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.313 09:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:30.313 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:30.313 09:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.313 09:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=82003 00:24:30.313 09:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:30.313 09:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 82003 /tmp/host.sock 00:24:30.313 09:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 82003 ']' 00:24:30.313 09:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:24:30.313 09:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:30.313 09:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:30.313 09:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:30.313 09:06:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:30.313 [2024-07-25 09:06:37.337643] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
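Two separate SPDK applications are now in play, which is the core of this test's layout: the nvmf target runs inside the namespace on its default RPC socket, while a second nvmf_tgt instance started with -r /tmp/host.sock acts as the host side and runs the discovery client. Reduced to the commands visible in the trace (rpc_cmd corresponds to scripts/rpc.py invocations; shown here as direct rpc.py calls for clarity):

    # target inside the namespace (default RPC socket), then its transport and discovery listener
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    scripts/rpc.py bdev_null_create null0 1000 512
    scripts/rpc.py bdev_null_create null1 1000 512
    # host-side instance, driven through /tmp/host.sock for the rest of the test
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &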
00:24:30.313 [2024-07-25 09:06:37.337791] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82003 ] 00:24:30.572 [2024-07-25 09:06:37.499709] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.831 [2024-07-25 09:06:37.793904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.089 [2024-07-25 09:06:37.996345] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:31.348 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:31.348 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:24:31.348 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:31.348 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:31.348 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.348 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:31.348 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.348 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:24:31.348 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.348 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:31.348 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.348 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:24:31.348 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:24:31.348 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:31.348 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:31.348 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.348 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:31.348 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:31.348 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:31.348 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.348 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:24:31.348 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:24:31.348 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:31.348 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:31.348 09:06:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:31.348 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:31.348 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.349 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:31.349 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.349 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:24:31.349 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:31.349 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.349 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:31.349 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.349 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:24:31.349 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:31.349 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.349 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:31.349 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:31.349 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:31.349 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:31.349 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.349 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:24:31.349 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:24:31.349 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:31.349 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:31.349 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.349 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:31.349 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:31.349 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:31.349 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.608 09:06:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:31.608 [2024-07-25 09:06:38.609531] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.608 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:31.867 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.867 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:31.867 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:24:31.867 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:24:31.867 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:31.867 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:31.868 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.868 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:31.868 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.868 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:31.868 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:31.868 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:31.868 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:31.868 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:31.868 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:24:31.868 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:31.868 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.868 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:31.868 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:31.868 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:31.868 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:31.868 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.868 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:24:31.868 09:06:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:24:32.436 [2024-07-25 09:06:39.271810] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:32.436 [2024-07-25 09:06:39.271894] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:32.436 
[2024-07-25 09:06:39.271960] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:32.436 [2024-07-25 09:06:39.278462] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:32.436 [2024-07-25 09:06:39.344334] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:32.436 [2024-07-25 09:06:39.344389] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:33.002 09:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:33.002 09:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:33.002 09:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:24:33.002 09:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:33.002 09:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.002 09:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:33.002 09:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:33.002 09:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:33.002 09:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:33.002 09:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.002 09:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.002 09:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:33.002 09:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:33.002 09:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:33.002 09:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:33.002 09:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:33.002 09:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:24:33.002 09:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:24:33.002 09:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:33.002 09:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:33.002 09:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:33.002 09:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.002 09:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:33.002 09:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:33.002 09:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:24:33.002 09:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:24:33.002 09:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:33.002 09:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:33.002 09:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:33.002 09:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:33.002 09:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:33.003 09:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:24:33.003 09:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:24:33.003 09:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:33.003 09:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.003 09:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:33.003 09:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:33.003 09:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:33.003 09:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:33.003 09:06:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.003 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:24:33.003 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:33.003 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:24:33.003 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:33.003 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:33.003 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:33.003 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:33.003 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:33.003 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:33.003 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:24:33.003 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:33.003 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:33.003 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.003 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:33.003 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.003 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:33.003 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:24:33.003 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:24:33.003 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:33.003 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:33.003 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.003 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:33.003 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.003 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:33.003 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:33.003 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:33.003 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:33.003 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:33.003 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:24:33.003 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:33.003 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:33.003 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.003 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:33.003 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:33.003 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:33.003 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.262 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:33.262 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:33.262 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:24:33.262 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:33.262 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:33.262 09:06:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:33.262 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:33.262 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:33.262 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:33.262 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:24:33.262 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:24:33.262 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:33.262 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.262 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:33.262 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.262 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:33.262 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:33.262 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:24:33.262 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:33.262 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:24:33.262 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.262 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:33.262 [2024-07-25 09:06:40.195757] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:33.262 [2024-07-25 09:06:40.196112] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:33.262 [2024-07-25 09:06:40.196171] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:33.262 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.262 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:33.262 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:33.262 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:33.262 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:33.262 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:33.262 [2024-07-25 09:06:40.202115] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:24:33.262 09:06:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:24:33.262 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:33.262 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:33.262 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.262 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:33.262 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:33.262 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:33.262 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.262 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.262 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:33.262 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:33.262 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:33.262 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:33.262 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:33.262 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:33.262 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:24:33.262 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:33.262 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:33.262 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:33.262 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:33.262 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.262 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:33.262 [2024-07-25 09:06:40.264790] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:33.262 [2024-07-25 09:06:40.264845] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:33.262 [2024-07-25 09:06:40.264869] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:33.263 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.263 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:33.263 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:33.263 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:33.263 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:33.263 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:33.263 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:33.263 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:33.263 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:24:33.263 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:33.263 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.263 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:33.263 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:33.263 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:33.263 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:33.263 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.263 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:33.263 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:33.263 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:24:33.263 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:33.263 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:33.263 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:33.263 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:33.263 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:33.263 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:33.263 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:24:33.263 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:33.263 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:33.263 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.263 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:33.263 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.522 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:33.522 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:33.522 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:24:33.522 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:33.522 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:33.522 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.522 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:33.522 [2024-07-25 09:06:40.412986] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:33.522 [2024-07-25 09:06:40.413041] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:33.522 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.522 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:33.522 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:33.522 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:33.522 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:33.522 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:33.522 [2024-07-25 09:06:40.419004] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:24:33.522 [2024-07-25 09:06:40.419064] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:33.522 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:24:33.522 [2024-07-25 09:06:40.419247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.522 [2024-07-25 09:06:40.419313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.522 [2024-07-25 09:06:40.419335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.522 [2024-07-25 09:06:40.419350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.522 [2024-07-25 09:06:40.419364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.522 [2024-07-25 09:06:40.419378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.522 [2024-07-25 09:06:40.419393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.522 [2024-07-25 09:06:40.419407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.522 [2024-07-25 09:06:40.419420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set 00:24:33.522 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:33.522 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:33.522 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:33.522 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.522 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:33.522 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:33.522 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.522 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.522 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:33.522 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:33.522 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:33.522 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:33.522 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:33.522 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # return 0 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:33.523 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:24:33.783 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:24:33.783 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:33.783 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.783 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:33.783 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:33.783 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:33.783 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:33.783 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.783 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:24:33.783 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:33.783 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:24:33.783 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:24:33.783 09:06:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:33.783 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:33.783 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:24:33.783 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:24:33.783 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:33.783 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:33.783 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.783 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:33.783 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:33.783 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:33.783 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.783 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:24:33.783 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:33.783 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:24:33.783 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:24:33.783 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:33.783 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:33.783 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:33.783 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:33.783 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:33.783 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:24:33.783 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:33.783 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.783 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:33.783 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:33.783 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.783 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:24:33.783 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:24:33.783 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:24:33.783 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:33.783 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:33.783 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.783 09:06:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:34.718 [2024-07-25 09:06:41.802827] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:34.718 [2024-07-25 09:06:41.802890] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:34.718 [2024-07-25 09:06:41.802925] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:34.718 [2024-07-25 09:06:41.808939] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:24:34.978 [2024-07-25 09:06:41.879493] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:34.978 [2024-07-25 09:06:41.879557] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:34.978 09:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.978 09:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:34.978 09:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:24:34.978 09:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:34.978 09:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:34.978 09:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:34.978 09:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:34.978 09:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:34.979 09:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:34.979 09:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.979 09:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:24:34.979 request: 00:24:34.979 { 00:24:34.979 "name": "nvme", 00:24:34.979 "trtype": "tcp", 00:24:34.979 "traddr": "10.0.0.2", 00:24:34.979 "adrfam": "ipv4", 00:24:34.979 "trsvcid": "8009", 00:24:34.979 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:34.979 "wait_for_attach": true, 00:24:34.979 "method": "bdev_nvme_start_discovery", 00:24:34.979 "req_id": 1 00:24:34.979 } 00:24:34.979 Got JSON-RPC error response 00:24:34.979 response: 00:24:34.979 { 00:24:34.979 "code": -17, 00:24:34.979 "message": "File exists" 00:24:34.979 } 00:24:34.979 09:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:34.979 09:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:24:34.979 09:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:34.979 09:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:34.979 09:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:34.979 09:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:24:34.979 09:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:34.979 09:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:34.979 09:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.979 09:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:34.979 09:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:34.979 09:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:34.979 09:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.979 09:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:34.979 09:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:24:34.979 09:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:34.979 09:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:34.979 09:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:34.979 09:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.979 09:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:34.979 09:06:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:34.979 09:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.979 09:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:34.979 09:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:34.979 09:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:24:34.979 09:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:34.979 09:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:34.979 09:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:34.979 09:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:34.979 09:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:34.979 09:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:34.979 09:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.979 09:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:34.979 request: 00:24:34.979 { 00:24:34.979 "name": "nvme_second", 00:24:34.979 "trtype": "tcp", 00:24:34.979 "traddr": "10.0.0.2", 00:24:34.979 "adrfam": "ipv4", 00:24:34.979 "trsvcid": "8009", 00:24:34.979 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:34.979 "wait_for_attach": true, 00:24:34.979 "method": "bdev_nvme_start_discovery", 00:24:34.979 "req_id": 1 00:24:34.979 } 00:24:34.979 Got JSON-RPC error response 00:24:34.979 response: 00:24:34.979 { 00:24:34.979 "code": -17, 00:24:34.979 "message": "File exists" 00:24:34.979 } 00:24:34.979 09:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:34.979 09:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:24:34.979 09:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:34.979 09:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:34.979 09:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:34.979 09:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:24:34.979 09:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:34.979 09:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:34.979 09:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:34.979 09:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:34.979 09:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.979 09:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:34.979 09:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.979 09:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:24:35.238 09:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:24:35.238 09:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:35.238 09:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:35.238 09:06:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.238 09:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:35.238 09:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.238 09:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:35.238 09:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.238 09:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:35.238 09:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:35.238 09:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:24:35.238 09:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:35.238 09:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:35.238 09:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:35.238 09:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:35.238 09:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:35.238 09:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:35.238 09:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.238 09:06:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.174 [2024-07-25 09:06:43.148445] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.174 [2024-07-25 09:06:43.148559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002bc80 with addr=10.0.0.2, port=8010 00:24:36.174 [2024-07-25 09:06:43.148633] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:36.174 [2024-07-25 09:06:43.148650] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:36.174 [2024-07-25 09:06:43.148666] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:37.110 [2024-07-25 09:06:44.148535] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.110 [2024-07-25 09:06:44.148644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002bf00 with addr=10.0.0.2, port=8010 00:24:37.110 [2024-07-25 09:06:44.148714] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:37.110 [2024-07-25 09:06:44.148731] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:37.110 [2024-07-25 09:06:44.148746] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:38.046 [2024-07-25 09:06:45.148144] 
bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:24:38.046 request: 00:24:38.046 { 00:24:38.046 "name": "nvme_second", 00:24:38.046 "trtype": "tcp", 00:24:38.046 "traddr": "10.0.0.2", 00:24:38.046 "adrfam": "ipv4", 00:24:38.046 "trsvcid": "8010", 00:24:38.046 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:38.046 "wait_for_attach": false, 00:24:38.046 "attach_timeout_ms": 3000, 00:24:38.046 "method": "bdev_nvme_start_discovery", 00:24:38.046 "req_id": 1 00:24:38.046 } 00:24:38.046 Got JSON-RPC error response 00:24:38.047 response: 00:24:38.047 { 00:24:38.047 "code": -110, 00:24:38.047 "message": "Connection timed out" 00:24:38.047 } 00:24:38.047 09:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:38.047 09:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:24:38.047 09:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:38.047 09:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:38.047 09:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:38.047 09:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:24:38.047 09:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:38.306 09:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:38.306 09:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.306 09:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:38.306 09:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:38.306 09:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.306 09:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.306 09:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:24:38.306 09:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:24:38.306 09:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 82003 00:24:38.306 09:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:24:38.306 09:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:38.306 09:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:24:38.306 09:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:38.306 09:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:24:38.306 09:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:38.306 09:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:38.306 rmmod nvme_tcp 00:24:38.306 rmmod nvme_fabrics 00:24:38.306 rmmod nvme_keyring 00:24:38.306 09:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:38.306 09:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:24:38.306 09:06:45 
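The second negative case targets port 8010, where nothing is listening, and bounds the attach with a 3000 ms timeout instead of -w. A sketch under the same assumptions as the snippet above:

# connect() to 10.0.0.2:8010 fails (errno 111), the discovery poller retries until the
# -T 3000 attach timeout expires, and the RPC returns -110 "Connection timed out".
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
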
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:24:38.306 09:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 81972 ']' 00:24:38.306 09:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 81972 00:24:38.306 09:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 81972 ']' 00:24:38.306 09:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 81972 00:24:38.306 09:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:24:38.306 09:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:38.306 09:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81972 00:24:38.306 killing process with pid 81972 00:24:38.306 09:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:38.306 09:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:38.306 09:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81972' 00:24:38.306 09:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 81972 00:24:38.306 09:06:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 81972 00:24:39.683 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:39.683 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:39.683 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:39.683 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:39.683 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:39.683 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:39.683 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:39.683 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:39.683 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:39.683 00:24:39.683 real 0m10.977s 00:24:39.683 user 0m20.847s 00:24:39.683 sys 0m2.184s 00:24:39.683 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:39.683 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.683 ************************************ 00:24:39.683 END TEST nvmf_host_discovery 00:24:39.683 ************************************ 00:24:39.683 09:06:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:39.683 09:06:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:39.683 09:06:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:39.683 09:06:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.683 ************************************ 00:24:39.683 
START TEST nvmf_host_multipath_status 00:24:39.683 ************************************ 00:24:39.683 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:39.683 * Looking for test storage... 00:24:39.683 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:39.683 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:39.683 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:39.683 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:39.683 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:39.683 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:39.683 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:39.683 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:39.683 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:39.683 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:39.683 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:39.683 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:39.683 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:39.683 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:24:39.683 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:24:39.683 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:39.683 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:39.683 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:39.683 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:39.683 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:39.683 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:39.683 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:39.683 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:39.683 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.683 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:39.684 Cannot find device "nvmf_tgt_br" 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:39.684 Cannot find device "nvmf_tgt_br2" 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:39.684 Cannot find device "nvmf_tgt_br" 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:39.684 Cannot find device "nvmf_tgt_br2" 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:24:39.684 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:39.944 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:39.944 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:39.944 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:39.944 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:24:39.944 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:39.944 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:39.944 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:24:39.944 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:39.944 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:39.944 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth 
peer name nvmf_tgt_br 00:24:39.944 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:39.944 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:39.944 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:39.944 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:39.944 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:39.944 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:39.944 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:39.944 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:39.944 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:39.944 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:39.944 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:39.944 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:39.944 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:39.944 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:39.944 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:39.944 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:39.944 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:39.944 09:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:39.944 09:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:39.944 09:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:39.944 09:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:39.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:39.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:24:39.944 00:24:39.944 --- 10.0.0.2 ping statistics --- 00:24:39.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.945 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:24:39.945 09:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:39.945 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:39.945 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:24:39.945 00:24:39.945 --- 10.0.0.3 ping statistics --- 00:24:39.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.945 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:24:39.945 09:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:39.945 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:39.945 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:24:39.945 00:24:39.945 --- 10.0.0.1 ping statistics --- 00:24:39.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.945 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:24:39.945 09:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:39.945 09:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:24:39.945 09:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:39.945 09:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:39.945 09:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:39.945 09:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:39.945 09:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:39.945 09:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:39.945 09:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:40.204 09:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:24:40.204 09:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:40.204 09:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:40.204 09:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:40.204 09:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:40.204 09:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=82457 00:24:40.204 09:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 82457 00:24:40.204 09:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 82457 ']' 00:24:40.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:40.204 09:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:40.204 09:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:40.204 09:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
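Before the multipath test proper, nvmf_veth_init builds the virtual topology that the three pings above verify. A condensed sketch of the commands visible in the log, assuming the same interface and namespace names (nvmf_init_if on the host, nvmf_tgt_if/nvmf_tgt_if2 inside nvmf_tgt_ns_spdk, everything bridged by nvmf_br):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target, first path
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # target, second path

ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
    ip link set "$dev" master nvmf_br
done

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3          # host -> namespace, both target addresses
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 # namespace -> host
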
00:24:40.204 09:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:40.204 09:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:40.204 [2024-07-25 09:06:47.186802] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:24:40.204 [2024-07-25 09:06:47.187781] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:40.463 [2024-07-25 09:06:47.370157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:40.722 [2024-07-25 09:06:47.648302] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:40.722 [2024-07-25 09:06:47.648386] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:40.722 [2024-07-25 09:06:47.648406] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:40.722 [2024-07-25 09:06:47.648422] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:40.722 [2024-07-25 09:06:47.648435] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:40.722 [2024-07-25 09:06:47.648567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:40.722 [2024-07-25 09:06:47.648755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:40.981 [2024-07-25 09:06:47.855155] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:41.240 09:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:41.240 09:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:24:41.240 09:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:41.240 09:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:41.240 09:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:41.240 09:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:41.240 09:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=82457 00:24:41.240 09:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:41.499 [2024-07-25 09:06:48.415314] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:41.499 09:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:41.758 Malloc0 00:24:41.758 09:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:24:42.016 09:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:42.275 09:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:42.533 [2024-07-25 09:06:49.418654] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:42.533 09:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:42.792 [2024-07-25 09:06:49.738875] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:42.792 09:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:42.792 09:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=82516 00:24:42.792 09:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:42.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:42.792 09:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 82516 /var/tmp/bdevperf.sock 00:24:42.792 09:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 82516 ']' 00:24:42.792 09:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:42.792 09:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:42.792 09:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
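With the namespace wired up, the target is provisioned over its default RPC socket (/var/tmp/spdk.sock, which waitforlisten polls for above) and a separate bdevperf process is launched as the multipath initiator. A sketch of that sequence, assuming the rpc.py and build paths of this run:

# nvmfappstart -m 0x3 in the log: run nvmf_tgt inside the target namespace.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &

# TCP transport, a 64 MiB malloc bdev with 512-byte blocks, and one subsystem
# (-r enables ANA reporting) exported on both 4420 and 4421 of the same address,
# giving the initiator two paths to the same namespace.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

# bdevperf is the initiator; -z makes it wait for configuration over its own RPC socket.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &
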
00:24:42.792 09:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:42.792 09:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:43.728 09:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:43.728 09:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:24:43.728 09:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:43.987 09:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:24:44.555 Nvme0n1 00:24:44.555 09:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:44.814 Nvme0n1 00:24:44.814 09:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:44.814 09:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:46.717 09:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:46.717 09:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:46.975 09:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:47.234 09:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:48.170 09:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:48.170 09:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:48.171 09:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.171 09:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:48.738 09:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:48.738 09:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:48.738 09:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.738 09:06:55 
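Once bdevperf is listening on /var/tmp/bdevperf.sock, the test attaches the same subsystem twice, the second time with -x multipath, and then starts the verify workload. A sketch of those calls as they appear in this block (bperf_rpc is just local shorthand here, not a helper from the test):

# Shorthand for rpc.py against the bdevperf RPC socket.
bperf_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock "$@"; }

bperf_rpc bdev_nvme_set_options -r -1   # retry count of -1, i.e. keep retrying, as the test sets

# The 4420 attach creates Nvme0n1; the 4421 attach with -x multipath adds a second
# path to the same bdev instead of a new controller.
bperf_rpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
bperf_rpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

# Kick off the queue-depth-128 verify workload defined on the bdevperf command line.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -t 120 -s /var/tmp/bdevperf.sock perform_tests &
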
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:48.738 09:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:48.738 09:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:48.738 09:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.738 09:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:48.997 09:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:48.997 09:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:48.997 09:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.997 09:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:49.256 09:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.256 09:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:49.256 09:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.256 09:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:49.515 09:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.515 09:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:49.515 09:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.515 09:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:49.773 09:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.773 09:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:49.773 09:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:50.033 09:06:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:50.296 09:06:57 
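Each scenario then flips the ANA state of the two listeners on the target side and gives the initiator a moment to notice before check_status runs. A sketch of the transition just issued above (set_ANA_state non_optimized optimized), using the target's default RPC socket:

# 4420 becomes non-optimized while 4421 stays optimized, so the next check_status
# expects the "current" flag to move to the 4421 path.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4421 -n optimized
sleep 1   # the test sleeps 1s so the initiator can pick up the ANA change
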
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:51.232 09:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:51.232 09:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:51.232 09:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:51.232 09:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.490 09:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:51.490 09:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:51.490 09:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:51.490 09:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.747 09:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:51.747 09:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:51.747 09:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:51.747 09:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.312 09:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:52.312 09:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:52.312 09:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:52.312 09:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.312 09:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:52.312 09:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:52.312 09:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.312 09:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:52.570 09:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:52.570 09:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:52.570 09:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.570 09:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:52.829 09:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:52.829 09:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:52.830 09:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:53.106 09:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:53.383 09:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:54.319 09:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:54.319 09:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:54.319 09:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.319 09:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:54.886 09:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.886 09:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:54.886 09:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.886 09:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:54.886 09:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:54.886 09:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:54.886 09:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.886 09:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:55.144 09:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:55.144 09:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:24:55.144 09:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:55.144 09:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:55.403 09:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:55.403 09:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:55.403 09:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:55.403 09:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:55.660 09:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:55.660 09:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:55.660 09:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:55.660 09:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:55.920 09:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:55.920 09:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:55.920 09:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:56.215 09:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:56.492 09:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:57.425 09:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:57.425 09:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:57.425 09:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:57.425 09:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.683 09:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.684 09:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:57.684 09:07:04 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.684 09:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:57.942 09:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:57.942 09:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:57.942 09:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.942 09:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:58.201 09:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:58.201 09:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:58.201 09:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:58.201 09:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.460 09:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:58.460 09:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:58.460 09:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.460 09:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:58.719 09:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:58.719 09:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:58.719 09:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.719 09:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:58.977 09:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:58.977 09:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:58.977 09:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:59.306 09:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:59.564 09:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:00.501 09:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:00.501 09:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:00.501 09:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:00.501 09:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:00.759 09:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:00.759 09:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:00.759 09:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:00.759 09:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:01.017 09:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:01.017 09:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:01.017 09:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:01.017 09:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:01.275 09:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:01.275 09:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:01.275 09:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:01.275 09:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:01.533 09:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:01.533 09:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:01.533 09:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:01.533 09:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:25:01.791 09:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:01.791 09:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:01.791 09:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:01.791 09:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:02.049 09:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:02.049 09:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:02.049 09:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:02.307 09:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:02.565 09:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:03.497 09:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:03.497 09:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:03.497 09:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:03.497 09:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:03.755 09:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:03.755 09:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:03.755 09:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:03.755 09:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:04.013 09:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:04.013 09:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:04.013 09:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:04.013 09:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
00:25:04.272 09:07:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:04.272 09:07:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:04.272 09:07:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:04.272 09:07:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:04.531 09:07:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:04.531 09:07:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:04.531 09:07:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:04.531 09:07:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:04.789 09:07:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:04.789 09:07:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:04.789 09:07:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:04.789 09:07:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:05.063 09:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:05.063 09:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:05.320 09:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:25:05.320 09:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:05.578 09:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:05.836 09:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:06.769 09:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:06.769 09:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:06.769 09:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
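Each scenario in the trace follows the same driver sequence: set the ANA state of both target listeners with nvmf_subsystem_listener_set_ana_state, sleep one second so the host side can apply the change, then re-run the path checks. After the bdev is switched to the active_active policy with bdev_nvme_set_multipath_policy (as just above), the subsequent optimized/optimized check expects both 4420 and 4421 to report current=true. A minimal sketch of that sequence, assuming the NQN, target address, and ports shown in the trace; the helper name set_ana_pair is illustrative:

set_ana_pair() {
    # $1 = ANA state for the 4420 listener, $2 = ANA state for the 4421 listener
    local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    # The traced script waits a second here before re-checking the reported io_paths
    sleep 1
}

# Example matching the scenario that follows: both listeners optimized under active_active
set_ana_pair optimized optimized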
00:25:06.769 09:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:07.026 09:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:07.026 09:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:07.026 09:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:07.026 09:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:07.284 09:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:07.284 09:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:07.284 09:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:07.284 09:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:07.542 09:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:07.542 09:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:07.542 09:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:07.542 09:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:07.800 09:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:07.800 09:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:07.800 09:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:07.800 09:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:08.057 09:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:08.057 09:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:08.057 09:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:08.057 09:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.333 09:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:08.333 
09:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:08.333 09:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:08.613 09:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:08.871 09:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:09.805 09:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:09.805 09:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:09.805 09:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:09.805 09:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:10.062 09:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:10.062 09:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:10.062 09:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:10.062 09:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:10.320 09:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:10.320 09:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:10.320 09:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:10.320 09:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:10.577 09:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:10.577 09:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:10.577 09:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:10.577 09:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:10.835 09:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:10.835 09:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:10.835 09:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:10.835 09:07:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:11.093 09:07:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:11.093 09:07:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:11.093 09:07:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.093 09:07:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:11.351 09:07:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:11.351 09:07:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:11.351 09:07:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:11.609 09:07:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:11.867 09:07:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:25:12.800 09:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:13.058 09:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:13.058 09:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.058 09:07:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:13.316 09:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:13.316 09:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:13.316 09:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.316 09:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:13.574 09:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:13.574 09:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:25:13.574 09:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:13.574 09:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.574 09:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:13.574 09:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:13.574 09:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.574 09:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:13.832 09:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:13.832 09:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:13.832 09:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.832 09:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:14.090 09:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:14.090 09:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:14.090 09:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:14.090 09:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:14.656 09:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:14.656 09:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:14.656 09:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:14.656 09:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:14.914 09:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:15.849 09:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:15.849 09:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:15.849 09:07:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:15.849 09:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:16.415 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:16.415 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:16.415 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.415 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:16.738 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:16.738 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:16.738 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.738 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:16.738 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:16.738 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:16.738 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.738 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:16.997 09:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:16.997 09:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:16.997 09:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:16.997 09:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:17.255 09:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:17.255 09:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:17.255 09:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:17.256 09:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:25:17.514 09:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:17.514 09:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 82516 00:25:17.514 09:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 82516 ']' 00:25:17.514 09:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 82516 00:25:17.514 09:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:25:17.514 09:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:17.514 09:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82516 00:25:17.514 killing process with pid 82516 00:25:17.514 09:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:25:17.514 09:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:25:17.514 09:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82516' 00:25:17.514 09:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 82516 00:25:17.514 09:07:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 82516 00:25:18.449 Connection closed with partial response: 00:25:18.449 00:25:18.449 00:25:18.711 09:07:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 82516 00:25:18.711 09:07:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:18.711 [2024-07-25 09:06:49.845269] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:25:18.711 [2024-07-25 09:06:49.845444] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82516 ] 00:25:18.711 [2024-07-25 09:06:50.011672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.711 [2024-07-25 09:06:50.250562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:18.711 [2024-07-25 09:06:50.461641] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:25:18.711 Running I/O for 90 seconds... 
00:25:18.711 [2024-07-25 09:07:06.247284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:50984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.711 [2024-07-25 09:07:06.247386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:18.711 [2024-07-25 09:07:06.247478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:50992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.711 [2024-07-25 09:07:06.247509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:18.711 [2024-07-25 09:07:06.247546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:51000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.711 [2024-07-25 09:07:06.247569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:18.711 [2024-07-25 09:07:06.247603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:51008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.711 [2024-07-25 09:07:06.247625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:18.711 [2024-07-25 09:07:06.247657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:51016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.711 [2024-07-25 09:07:06.247679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:18.711 [2024-07-25 09:07:06.247712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:51024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.711 [2024-07-25 09:07:06.247734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:18.711 [2024-07-25 09:07:06.247766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:51032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.711 [2024-07-25 09:07:06.247789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:18.711 [2024-07-25 09:07:06.247837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:51040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.711 [2024-07-25 09:07:06.247863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:18.711 [2024-07-25 09:07:06.247904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:50536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.711 [2024-07-25 09:07:06.247945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:18.711 [2024-07-25 09:07:06.247980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:50544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.711 [2024-07-25 09:07:06.248003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:18.711 [2024-07-25 09:07:06.248036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:50552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.711 [2024-07-25 09:07:06.248076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:18.711 [2024-07-25 09:07:06.248116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:50560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.711 [2024-07-25 09:07:06.248139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:18.711 [2024-07-25 09:07:06.248171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:50568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.712 [2024-07-25 09:07:06.248193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:18.712 [2024-07-25 09:07:06.248224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:50576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.712 [2024-07-25 09:07:06.248246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:18.712 [2024-07-25 09:07:06.248278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:50584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.712 [2024-07-25 09:07:06.248300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:18.712 [2024-07-25 09:07:06.248335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:50592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.712 [2024-07-25 09:07:06.248357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:18.712 [2024-07-25 09:07:06.248410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:51048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.712 [2024-07-25 09:07:06.248438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:18.712 [2024-07-25 09:07:06.248473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:51056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.712 [2024-07-25 09:07:06.248496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:18.712 [2024-07-25 09:07:06.248527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:51064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.712 [2024-07-25 09:07:06.248549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:18.712 [2024-07-25 09:07:06.248581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:51072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.712 [2024-07-25 09:07:06.248603] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.712 [2024-07-25 09:07:06.248634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:51080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.712 [2024-07-25 09:07:06.248656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:18.712 [2024-07-25 09:07:06.248688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:51088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.712 [2024-07-25 09:07:06.248709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:18.712 [2024-07-25 09:07:06.248741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:51096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.712 [2024-07-25 09:07:06.248779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:18.712 [2024-07-25 09:07:06.248828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:51104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.712 [2024-07-25 09:07:06.248855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:18.712 [2024-07-25 09:07:06.248896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:51112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.712 [2024-07-25 09:07:06.248918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:18.712 [2024-07-25 09:07:06.248951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:51120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.712 [2024-07-25 09:07:06.248973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:18.712 [2024-07-25 09:07:06.249005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:51128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.712 [2024-07-25 09:07:06.249027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:18.712 [2024-07-25 09:07:06.249059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:51136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.712 [2024-07-25 09:07:06.249081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:18.712 [2024-07-25 09:07:06.249121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:51144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.712 [2024-07-25 09:07:06.249143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:18.712 [2024-07-25 09:07:06.249174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:51152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:18.712 [2024-07-25 09:07:06.249196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:18.712 [2024-07-25 09:07:06.249228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:51160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.712 [2024-07-25 09:07:06.249250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:18.712 [2024-07-25 09:07:06.249283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:51168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.712 [2024-07-25 09:07:06.249305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:18.712 [2024-07-25 09:07:06.249337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:50600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.712 [2024-07-25 09:07:06.249359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:18.712 [2024-07-25 09:07:06.249392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:50608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.712 [2024-07-25 09:07:06.249434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:18.712 [2024-07-25 09:07:06.249467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:50616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.712 [2024-07-25 09:07:06.249490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:18.712 [2024-07-25 09:07:06.249535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:50624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.712 [2024-07-25 09:07:06.249559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:18.712 [2024-07-25 09:07:06.249591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:50632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.712 [2024-07-25 09:07:06.249613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:18.712 [2024-07-25 09:07:06.249645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:50640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.712 [2024-07-25 09:07:06.249667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:18.712 [2024-07-25 09:07:06.249700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:50648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.712 [2024-07-25 09:07:06.249722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:18.712 [2024-07-25 09:07:06.249754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 
lba:50656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.712 [2024-07-25 09:07:06.249777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:18.712 [2024-07-25 09:07:06.249809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:50664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.712 [2024-07-25 09:07:06.249849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:18.712 [2024-07-25 09:07:06.249883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:50672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.712 [2024-07-25 09:07:06.249907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:18.712 [2024-07-25 09:07:06.249939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:50680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.712 [2024-07-25 09:07:06.249961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:18.712 [2024-07-25 09:07:06.249997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:50688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.712 [2024-07-25 09:07:06.250019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:18.712 [2024-07-25 09:07:06.250050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:50696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.712 [2024-07-25 09:07:06.250074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:18.712 [2024-07-25 09:07:06.250113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:50704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.712 [2024-07-25 09:07:06.250135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:18.712 [2024-07-25 09:07:06.250168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:50712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.712 [2024-07-25 09:07:06.250190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:18.712 [2024-07-25 09:07:06.250232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:50720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.712 [2024-07-25 09:07:06.250258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:18.712 [2024-07-25 09:07:06.250290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:50728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.713 [2024-07-25 09:07:06.250312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:18.713 [2024-07-25 09:07:06.250344] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:50736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.713 [2024-07-25 09:07:06.250367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:18.713 [2024-07-25 09:07:06.250399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:50744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.713 [2024-07-25 09:07:06.250421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:18.713 [2024-07-25 09:07:06.250453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:50752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.713 [2024-07-25 09:07:06.250475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.713 [2024-07-25 09:07:06.250506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:50760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.713 [2024-07-25 09:07:06.250529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:18.713 [2024-07-25 09:07:06.250561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:50768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.713 [2024-07-25 09:07:06.250583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:18.713 [2024-07-25 09:07:06.250616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:50776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.713 [2024-07-25 09:07:06.250638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:18.713 [2024-07-25 09:07:06.250670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:50784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.713 [2024-07-25 09:07:06.250693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:18.713 [2024-07-25 09:07:06.250757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:51176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.713 [2024-07-25 09:07:06.250785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:18.713 [2024-07-25 09:07:06.250836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:51184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.713 [2024-07-25 09:07:06.250862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:18.713 [2024-07-25 09:07:06.250895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:51192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.713 [2024-07-25 09:07:06.250923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 
00:25:18.713 [2024-07-25 09:07:06.250955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:51200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.713 [2024-07-25 09:07:06.250995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:18.713 [2024-07-25 09:07:06.251030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:51208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.713 [2024-07-25 09:07:06.251053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:18.713 [2024-07-25 09:07:06.251085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:51216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.713 [2024-07-25 09:07:06.251108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:18.713 [2024-07-25 09:07:06.251140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:51224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.713 [2024-07-25 09:07:06.251162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:18.713 [2024-07-25 09:07:06.251194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:51232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.713 [2024-07-25 09:07:06.251217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:18.713 [2024-07-25 09:07:06.251248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:51240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.713 [2024-07-25 09:07:06.251270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:18.713 [2024-07-25 09:07:06.251301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:51248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.713 [2024-07-25 09:07:06.251324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:18.713 [2024-07-25 09:07:06.251355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:51256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.713 [2024-07-25 09:07:06.251377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:18.713 [2024-07-25 09:07:06.251413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:51264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.713 [2024-07-25 09:07:06.251435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:18.713 [2024-07-25 09:07:06.251466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:51272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.713 [2024-07-25 09:07:06.251488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:80 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:25:18.713 [2024-07-25 09:07:06.251 through 09:07:21.936: several hundred near-identical nvme_qpair.c *NOTICE* pairs condensed here - nvme_io_qpair_print_command (243) reporting READ/WRITE commands on sqid:1 (len:8, lba roughly 50792-51552 and 15040-16576) and spdk_nvme_print_completion (474) reporting every completion as ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0, i.e. I/O kept failing over the ANA-inaccessible path for the duration of the multipath status test.]
00:25:18.718 Received shutdown signal, test time was about 32.736536 seconds
00:25:18.718
00:25:18.718 Latency(us)
00:25:18.718 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:18.718 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:18.718 Verification LBA range: start 0x0 length 0x4000
00:25:18.718 Nvme0n1 : 32.74 6610.15 25.82 0.00 0.00 19326.19 480.35 4026531.84
00:25:18.718 ===================================================================================================================
00:25:18.718 Total : 6610.15 25.82 0.00 0.00 19326.19 480.35 4026531.84
00:25:18.718 09:07:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:18.976 09:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:25:18.976 09:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:25:18.976 09:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:25:18.976 09:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:25:18.976 09:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:25:19.233 09:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:25:19.233 09:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:25:19.233 09:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:25:19.233 09:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:25:19.233 rmmod nvme_tcp
00:25:19.233 rmmod nvme_fabrics
00:25:19.233 rmmod nvme_keyring
00:25:19.233 09:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:25:19.233 09:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:25:19.233 09:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:25:19.233 09:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 82457 ']'
00:25:19.233 09:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- #
killprocess 82457 00:25:19.234 09:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 82457 ']' 00:25:19.234 09:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 82457 00:25:19.234 09:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:25:19.234 09:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:19.234 09:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82457 00:25:19.234 09:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:19.234 09:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:19.234 killing process with pid 82457 00:25:19.234 09:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82457' 00:25:19.234 09:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 82457 00:25:19.234 09:07:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 82457 00:25:20.607 09:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:20.607 09:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:20.607 09:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:20.607 09:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:20.607 09:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:20.607 09:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:20.607 09:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:20.607 09:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:20.607 09:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:20.607 00:25:20.607 real 0m40.996s 00:25:20.607 user 2m10.176s 00:25:20.607 sys 0m11.064s 00:25:20.607 09:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:20.607 09:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:20.607 ************************************ 00:25:20.607 END TEST nvmf_host_multipath_status 00:25:20.607 ************************************ 00:25:20.607 09:07:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:20.607 09:07:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:20.607 09:07:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:20.607 09:07:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.607 ************************************ 00:25:20.607 START TEST nvmf_discovery_remove_ifc 00:25:20.607 ************************************ 00:25:20.607 09:07:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:20.868 * Looking for test storage... 00:25:20.868 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 
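[editor's note] To make the prologue above easier to follow, the traced nvmf/common.sh block amounts to the environment sketched below. This is a condensation of the xtrace output, not the authoritative common.sh; in particular, the NVME_HOSTID derivation is shown only for illustration, since the log records the resulting UUID but not how it is computed.

# Condensed from the trace above (reconstruction, not the source file)
NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422
NVMF_SERIAL=SPDKISFASTANDAWESOME
NET_TYPE=virt
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:a4705431-...
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # illustrative only; the trace shows just the resulting UUID
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
# Application arguments assembled by build_nvmf_app_args: shared-memory id plus full trace mask
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
NVMF_APP+=("${NO_HUGE[@]}")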
00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 
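[editor's note] The NVMF_* variables above name the virtual topology that nvmf_veth_init builds in the trace that follows: one initiator-side veth pair plus two target-side pairs, all enslaved to a single bridge, with the target ends moved into the nvmf_tgt_ns_spdk namespace. Interface names and addresses below are taken from the log; the loop structure is mine, not the helper's actual code.

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side: 10.0.0.1/24
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side:    10.0.0.2/24
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # target side:    10.0.0.3/24
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
for br in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$br" up
    ip link set "$br" master nvmf_br
done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT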
00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:20.868 Cannot find device "nvmf_tgt_br" 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:20.868 Cannot find device "nvmf_tgt_br2" 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:20.868 Cannot find device "nvmf_tgt_br" 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:20.868 Cannot find device "nvmf_tgt_br2" 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:20.868 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:20.868 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:20.868 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:20.869 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:20.869 09:07:27 
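[editor's note] The "Cannot find device" and "Cannot open network namespace" messages above are expected: before building the topology, the helper tears down anything left over from a previous run, and each failing command is immediately followed by a traced true on the same script line, i.e. failures are deliberately ignored. Roughly (a sketch of the pattern, not the exact helper code):

# Best-effort teardown of stale test interfaces; errors are tolerated on a clean host.
ip link set nvmf_init_br nomaster                          || true
ip link set nvmf_tgt_br nomaster                           || true
ip link set nvmf_tgt_br2 nomaster                          || true
ip link set nvmf_init_br down                              || true
ip link set nvmf_tgt_br down                               || true
ip link set nvmf_tgt_br2 down                              || true
ip link delete nvmf_br type bridge                         || true
ip link delete nvmf_init_if                                || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true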
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:20.869 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:20.869 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:20.869 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:20.869 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:21.141 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:21.142 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:21.142 09:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:21.142 09:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:21.142 09:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:21.142 09:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:21.142 09:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:21.142 09:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:21.142 09:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:21.142 09:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:21.142 09:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:21.142 09:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:21.142 09:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:21.142 09:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:21.142 09:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:21.142 09:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:21.142 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:21.142 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:25:21.142 00:25:21.142 --- 10.0.0.2 ping statistics --- 00:25:21.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.142 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:25:21.142 09:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:21.142 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:25:21.142 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:25:21.142 00:25:21.142 --- 10.0.0.3 ping statistics --- 00:25:21.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.142 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:25:21.142 09:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:21.142 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:21.142 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:25:21.142 00:25:21.142 --- 10.0.0.1 ping statistics --- 00:25:21.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.142 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:25:21.142 09:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:21.142 09:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:25:21.142 09:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:21.142 09:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:21.142 09:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:21.142 09:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:21.142 09:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:21.142 09:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:21.142 09:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:21.142 09:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:21.142 09:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:21.142 09:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:21.142 09:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:21.142 09:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=83303 00:25:21.142 09:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 83303 00:25:21.142 09:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:21.142 09:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 83303 ']' 00:25:21.142 09:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:21.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:21.142 09:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:21.142 09:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
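[editor's note] With connectivity verified by the pings above, nvmfappstart launches the target inside the namespace and waits for its RPC socket. A minimal reconstruction of that step follows; the launch command and socket path are from the trace, while the readiness probe is an assumption about what waitforlisten does rather than its actual code.

# Launch nvmf_tgt in the target namespace and wait until /var/tmp/spdk.sock answers RPCs.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

rpc_addr=/var/tmp/spdk.sock
max_retries=100
for ((i = 0; i < max_retries; i++)); do
    # Assumed probe: any RPC succeeding over the UNIX socket means the target is listening.
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.1
done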
00:25:21.142 09:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:21.142 09:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:21.142 [2024-07-25 09:07:28.247442] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:25:21.142 [2024-07-25 09:07:28.247608] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:21.408 [2024-07-25 09:07:28.432681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.975 [2024-07-25 09:07:28.779293] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:21.975 [2024-07-25 09:07:28.779387] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:21.975 [2024-07-25 09:07:28.779404] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:21.975 [2024-07-25 09:07:28.779420] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:21.975 [2024-07-25 09:07:28.779432] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:21.975 [2024-07-25 09:07:28.779501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:21.975 [2024-07-25 09:07:29.033006] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:25:22.233 09:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:22.233 09:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:25:22.233 09:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:22.233 09:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:22.233 09:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:22.233 09:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:22.233 09:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:22.233 09:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.233 09:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:22.233 [2024-07-25 09:07:29.239767] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:22.233 [2024-07-25 09:07:29.248051] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:22.233 null0 00:25:22.233 [2024-07-25 09:07:29.280152] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:22.233 09:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.233 09:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=83335 00:25:22.233 09:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:25:22.233 09:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 83335 /tmp/host.sock 00:25:22.233 09:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 83335 ']' 00:25:22.233 09:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:25:22.233 09:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:22.233 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:22.233 09:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:22.233 09:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:22.233 09:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:22.491 [2024-07-25 09:07:29.406268] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:25:22.491 [2024-07-25 09:07:29.406454] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83335 ] 00:25:22.491 [2024-07-25 09:07:29.574931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.769 [2024-07-25 09:07:29.840827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:23.335 09:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:23.335 09:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:25:23.335 09:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:23.335 09:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:23.335 09:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.335 09:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:23.335 09:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.335 09:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:23.335 09:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.335 09:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:23.593 [2024-07-25 09:07:30.493608] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:25:23.594 09:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.594 09:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:23.594 09:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.594 09:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:24.528 [2024-07-25 09:07:31.624888] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:24.528 [2024-07-25 09:07:31.624984] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:24.528 [2024-07-25 09:07:31.625092] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:24.528 [2024-07-25 09:07:31.631072] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:24.786 [2024-07-25 09:07:31.696719] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:24.786 [2024-07-25 09:07:31.696844] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:24.786 [2024-07-25 09:07:31.696918] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:24.786 [2024-07-25 09:07:31.696948] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:24.786 [2024-07-25 09:07:31.696991] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:24.786 09:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.786 09:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:24.786 09:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:24.786 09:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:24.786 09:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.786 09:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:24.786 09:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:24.786 09:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:24.786 09:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:24.786 [2024-07-25 09:07:31.703386] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x61500002b000 was disconnected and freed. delete nvme_qpair. 
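[editor's note] The repeated rpc_cmd | jq | sort | xargs blocks that follow are the test's polling loop: it waits until the host app's bdev list matches an expected value. Reconstructed from the traced commands (the real helpers live in discovery_remove_ifc.sh; this is a sketch):

# List bdev names visible to the host app, as a single sorted line.
get_bdev_list() {
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

# Poll once per second until the bdev list equals the expected string
# (e.g. "nvme0n1" after attach, "" after the interface is removed).
wait_for_bdev() {
    while [[ "$(get_bdev_list)" != "$1" ]]; do
        sleep 1
    done
}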
00:25:24.786 09:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.786 09:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:24.786 09:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:25:24.786 09:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:25:24.786 09:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:24.786 09:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:24.786 09:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:24.786 09:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:24.786 09:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.786 09:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:24.786 09:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:24.786 09:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:24.786 09:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.786 09:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:24.786 09:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:25.720 09:07:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:25.720 09:07:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:25.720 09:07:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:25.720 09:07:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.720 09:07:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:25.720 09:07:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:25.977 09:07:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:25.977 09:07:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.977 09:07:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:25.977 09:07:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:26.910 09:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:26.910 09:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:26.910 09:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:26.910 
09:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.910 09:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:26.910 09:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:26.910 09:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:26.910 09:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.910 09:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:26.910 09:07:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:27.843 09:07:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:27.843 09:07:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:27.843 09:07:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.843 09:07:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:27.843 09:07:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:27.843 09:07:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:27.843 09:07:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:28.101 09:07:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.101 09:07:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:28.101 09:07:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:29.033 09:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:29.033 09:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:29.033 09:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:29.033 09:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.033 09:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:29.033 09:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:29.033 09:07:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:29.033 09:07:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.033 09:07:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:29.033 09:07:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:30.003 09:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:30.003 09:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:30.003 09:07:37 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.003 09:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:30.003 09:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:30.003 09:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:30.003 09:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:30.003 09:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.003 09:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:30.003 09:07:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:30.261 [2024-07-25 09:07:37.124104] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:30.261 [2024-07-25 09:07:37.124235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.261 [2024-07-25 09:07:37.124276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.261 [2024-07-25 09:07:37.124300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.261 [2024-07-25 09:07:37.124330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.261 [2024-07-25 09:07:37.124345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.262 [2024-07-25 09:07:37.124359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.262 [2024-07-25 09:07:37.124374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.262 [2024-07-25 09:07:37.124388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.262 [2024-07-25 09:07:37.124404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.262 [2024-07-25 09:07:37.124418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.262 [2024-07-25 09:07:37.124432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(5) to be set 00:25:30.262 [2024-07-25 09:07:37.134099] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:25:30.262 [2024-07-25 09:07:37.144133] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:31.196 09:07:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:31.196 09:07:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:31.196 09:07:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:31.196 09:07:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.196 09:07:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:31.196 09:07:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:31.196 09:07:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:31.196 [2024-07-25 09:07:38.190985] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:25:31.196 [2024-07-25 09:07:38.191168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ad80 with addr=10.0.0.2, port=4420 00:25:31.196 [2024-07-25 09:07:38.191220] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(5) to be set 00:25:31.196 [2024-07-25 09:07:38.191350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:25:31.196 [2024-07-25 09:07:38.192779] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:31.196 [2024-07-25 09:07:38.193005] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:31.196 [2024-07-25 09:07:38.193090] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:31.196 [2024-07-25 09:07:38.193148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:31.196 [2024-07-25 09:07:38.193259] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:31.196 [2024-07-25 09:07:38.193327] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:31.196 09:07:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.196 09:07:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:31.196 09:07:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:32.132 [2024-07-25 09:07:39.193439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:32.132 [2024-07-25 09:07:39.193537] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:32.132 [2024-07-25 09:07:39.193557] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:32.132 [2024-07-25 09:07:39.193575] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:25:32.132 [2024-07-25 09:07:39.193636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:32.132 [2024-07-25 09:07:39.193729] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:25:32.132 [2024-07-25 09:07:39.193829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.132 [2024-07-25 09:07:39.193868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.132 [2024-07-25 09:07:39.193892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.132 [2024-07-25 09:07:39.193906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.132 [2024-07-25 09:07:39.193921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.132 [2024-07-25 09:07:39.193936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.132 [2024-07-25 09:07:39.193952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.132 [2024-07-25 09:07:39.193965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.132 [2024-07-25 09:07:39.193980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.132 [2024-07-25 09:07:39.193994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.132 [2024-07-25 09:07:39.194008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
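[editor's note] The controller teardown above is the expected consequence of the options passed to bdev_nvme_start_discovery earlier. Restating that call with comments on how I read each knob; the flag values come from the log, but the interpretations are mine and are not stated in the log itself.

# --reconnect-delay-sec 1      : retry the broken connection roughly once per second
# --fast-io-fail-timeout-sec 1 : fail outstanding I/O quickly instead of queueing it behind the reset
# --ctrlr-loss-timeout-sec 2   : give up and delete the controller after about 2 s without a usable path
# --wait-for-attach            : make the RPC return only once the discovered subsystem is attached
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 --wait-for-attach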
00:25:32.132 [2024-07-25 09:07:39.194042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:25:32.132 [2024-07-25 09:07:39.194653] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:32.132 [2024-07-25 09:07:39.194691] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:25:32.132 09:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:32.132 09:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:32.132 09:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.132 09:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:32.132 09:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:32.132 09:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:32.132 09:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:32.132 09:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.392 09:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:32.392 09:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:32.392 09:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:32.392 09:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:32.392 09:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:32.392 09:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:32.392 09:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:32.392 09:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:32.392 09:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.392 09:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:32.392 09:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:32.392 09:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.392 09:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:32.392 09:07:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:33.329 09:07:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:33.329 09:07:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:33.329 09:07:40 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:33.329 09:07:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:33.329 09:07:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.329 09:07:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:33.329 09:07:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:33.329 09:07:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.329 09:07:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:33.329 09:07:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:34.265 [2024-07-25 09:07:41.200621] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:34.265 [2024-07-25 09:07:41.200676] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:34.265 [2024-07-25 09:07:41.200732] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:34.265 [2024-07-25 09:07:41.206717] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:34.265 [2024-07-25 09:07:41.272604] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:34.265 [2024-07-25 09:07:41.272693] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:34.265 [2024-07-25 09:07:41.272763] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:34.265 [2024-07-25 09:07:41.272792] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:25:34.265 [2024-07-25 09:07:41.272808] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:34.265 [2024-07-25 09:07:41.279504] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x61500002b780 was disconnected and freed. delete nvme_qpair. 
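[editor's note] Taken together, the sequence above is the core of the test: break the data path, confirm the bdev disappears once the controller-loss timeout fires, then restore the path and confirm discovery re-attaches. Condensed below, with the ip commands as traced and wait_for_bdev as sketched earlier:

# 1) Break the path to the target: drop its address and down the namespaced interface.
ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
wait_for_bdev ''          # reconnects fail with errno 110 until ctrlr-loss-timeout deletes nvme0n1

# 2) Restore the path: re-add the address and bring the interface back up.
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
wait_for_bdev nvme1n1     # the discovery service re-attaches and exposes a fresh controller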
00:25:34.525 09:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:34.525 09:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:34.525 09:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:34.525 09:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.525 09:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:34.525 09:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:34.525 09:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:34.525 09:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.525 09:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:34.525 09:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:34.525 09:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 83335 00:25:34.525 09:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 83335 ']' 00:25:34.525 09:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 83335 00:25:34.525 09:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:25:34.525 09:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:34.525 09:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83335 00:25:34.525 killing process with pid 83335 00:25:34.525 09:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:34.525 09:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:34.525 09:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83335' 00:25:34.525 09:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 83335 00:25:34.525 09:07:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 83335 00:25:35.903 09:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:35.903 09:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:35.903 09:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:25:35.903 09:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:35.903 09:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:25:35.903 09:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:35.903 09:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:35.903 rmmod nvme_tcp 00:25:35.903 rmmod nvme_fabrics 00:25:35.903 rmmod nvme_keyring 00:25:35.903 09:07:42 
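[editor's note] The killprocess trace above (and the ones for pids 82457 and 83303 elsewhere in this log) follows a fixed pattern; a condensed reconstruction inferred from the xtrace output rather than the authoritative autotest_common.sh:

killprocess() {
    local pid=$1 process_name
    [[ -z "$pid" ]] && return 1
    kill -0 "$pid" 2> /dev/null || return 0   # already gone (the real helper may handle this differently)
    [[ "$(uname)" == Linux ]] && process_name=$(ps --no-headers -o comm= "$pid")
    # The trace also compares $process_name against "sudo"; that branch is never taken in this run.
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}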
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:35.903 09:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:25:35.903 09:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:25:35.903 09:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 83303 ']' 00:25:35.903 09:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 83303 00:25:35.903 09:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 83303 ']' 00:25:35.903 09:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 83303 00:25:35.903 09:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:25:35.903 09:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:35.903 09:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83303 00:25:35.903 killing process with pid 83303 00:25:35.903 09:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:35.903 09:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:35.903 09:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83303' 00:25:35.903 09:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 83303 00:25:35.903 09:07:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 83303 00:25:37.279 09:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:37.279 09:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:37.279 09:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:37.279 09:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:37.279 09:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:37.279 09:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:37.279 09:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:37.279 09:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:37.279 09:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:37.279 00:25:37.279 real 0m16.532s 00:25:37.279 user 0m27.729s 00:25:37.279 sys 0m2.719s 00:25:37.279 09:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:37.279 ************************************ 00:25:37.279 END TEST nvmf_discovery_remove_ifc 00:25:37.279 ************************************ 00:25:37.279 09:07:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:37.279 09:07:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:37.279 09:07:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:37.279 09:07:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:37.279 09:07:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.279 ************************************ 00:25:37.279 START TEST nvmf_identify_kernel_target 00:25:37.279 ************************************ 00:25:37.279 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:37.279 * Looking for test storage... 00:25:37.279 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:37.279 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:37.279 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:25:37.279 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:37.279 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:37.279 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:37.279 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:37.279 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:37.279 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:37.279 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:37.279 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:37.279 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:37.279 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:37.279 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:25:37.279 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:25:37.279 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:37.279 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:37.279 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:37.279 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:37.279 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:37.279 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:37.279 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:37.279 
09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:37.279 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.279 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.279 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.279 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:37.279 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.279 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:25:37.279 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:37.280 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:37.280 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:37.280 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:25:37.280 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:37.280 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:37.280 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:37.280 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:37.280 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:25:37.280 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:37.280 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:37.280 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:37.280 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:37.280 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:37.280 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:37.280 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:37.280 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:37.280 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:25:37.280 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:25:37.280 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:25:37.280 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:25:37.280 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:25:37.280 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:25:37.280 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:37.280 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:37.280 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:37.280 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:37.280 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:37.280 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:37.280 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:37.280 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:37.280 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:37.280 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:37.280 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:37.280 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:37.280 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:37.280 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:37.280 Cannot find device "nvmf_tgt_br" 00:25:37.280 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:25:37.280 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:37.539 Cannot find device "nvmf_tgt_br2" 00:25:37.539 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:25:37.539 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:37.539 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:37.539 Cannot find device "nvmf_tgt_br" 00:25:37.539 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:25:37.539 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:37.539 Cannot find device "nvmf_tgt_br2" 00:25:37.539 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:25:37.539 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:37.539 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:37.539 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:37.539 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:37.539 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:25:37.539 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:37.539 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:37.539 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:25:37.539 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:37.539 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:37.539 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:37.540 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:37.540 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:37.540 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:37.540 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:37.540 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:37.540 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:37.540 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:37.540 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:37.540 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:37.540 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:37.540 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:37.540 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:37.540 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:37.540 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:37.540 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:37.540 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:37.540 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:37.540 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:37.540 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:37.540 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:37.540 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:37.540 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:37.540 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:25:37.540 00:25:37.540 --- 10.0.0.2 ping statistics --- 00:25:37.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.540 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:25:37.540 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:37.540 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:37.540 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:25:37.540 00:25:37.540 --- 10.0.0.3 ping statistics --- 00:25:37.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.540 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:25:37.540 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:37.540 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:37.540 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:25:37.540 00:25:37.540 --- 10.0.0.1 ping statistics --- 00:25:37.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.540 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:25:37.540 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:37.540 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:25:37.540 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:37.540 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:37.540 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:37.540 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:37.540 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:37.540 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:37.540 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:37.799 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:37.799 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:37.799 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:25:37.799 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:37.799 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:37.799 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.799 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.799 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:37.799 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.799 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:37.799 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:37.799 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:37.799 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:37.799 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:37.799 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:37.799 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:37.799 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:37.799 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:37.799 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:37.799 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:25:37.799 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:25:37.799 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:37.799 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:37.799 09:07:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:38.101 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:38.101 Waiting for block devices as requested 00:25:38.101 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:38.101 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:38.360 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:38.360 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:38.360 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:38.360 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:25:38.360 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:38.360 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:38.360 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:38.360 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:38.360 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:25:38.360 No valid GPT data, bailing 00:25:38.360 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:38.360 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:25:38.360 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:25:38.360 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:38.360 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:38.360 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:25:38.360 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:25:38.360 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:25:38.360 09:07:45 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:25:38.360 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:38.360 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:25:38.360 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:25:38.360 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:25:38.361 No valid GPT data, bailing 00:25:38.361 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:25:38.361 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:25:38.361 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:25:38.361 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:25:38.361 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:38.361 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:25:38.361 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:25:38.361 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:25:38.361 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:25:38.361 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:38.361 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:25:38.361 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:25:38.361 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:25:38.361 No valid GPT data, bailing 00:25:38.619 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:25:38.619 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:25:38.619 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:25:38.619 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:25:38.619 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:38.619 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:25:38.619 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:25:38.619 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:25:38.619 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:25:38.619 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:38.619 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:25:38.619 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:25:38.619 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:25:38.619 No valid GPT data, bailing 00:25:38.619 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:25:38.619 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:25:38.619 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:25:38.619 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:25:38.619 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:25:38.619 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:38.619 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:38.619 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:38.619 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:38.619 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:25:38.619 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:25:38.619 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:25:38.619 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:25:38.619 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:25:38.619 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:25:38.619 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:25:38.619 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:38.620 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid=a4705431-95c9-4bc1-9185-4a8233d2d7f5 -a 10.0.0.1 -t tcp -s 4420 00:25:38.620 00:25:38.620 Discovery Log Number of Records 2, Generation counter 2 00:25:38.620 =====Discovery Log Entry 0====== 00:25:38.620 trtype: tcp 00:25:38.620 adrfam: ipv4 00:25:38.620 subtype: current discovery subsystem 00:25:38.620 treq: not specified, sq flow control disable supported 00:25:38.620 portid: 1 00:25:38.620 trsvcid: 4420 00:25:38.620 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:38.620 traddr: 10.0.0.1 00:25:38.620 eflags: none 00:25:38.620 sectype: none 00:25:38.620 =====Discovery Log Entry 1====== 00:25:38.620 trtype: tcp 00:25:38.620 adrfam: ipv4 00:25:38.620 subtype: nvme subsystem 00:25:38.620 treq: not 
specified, sq flow control disable supported 00:25:38.620 portid: 1 00:25:38.620 trsvcid: 4420 00:25:38.620 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:38.620 traddr: 10.0.0.1 00:25:38.620 eflags: none 00:25:38.620 sectype: none 00:25:38.620 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:38.620 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:38.879 ===================================================== 00:25:38.879 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:38.879 ===================================================== 00:25:38.879 Controller Capabilities/Features 00:25:38.879 ================================ 00:25:38.879 Vendor ID: 0000 00:25:38.879 Subsystem Vendor ID: 0000 00:25:38.879 Serial Number: 0124aab2fab6af017e92 00:25:38.879 Model Number: Linux 00:25:38.879 Firmware Version: 6.7.0-68 00:25:38.879 Recommended Arb Burst: 0 00:25:38.879 IEEE OUI Identifier: 00 00 00 00:25:38.879 Multi-path I/O 00:25:38.879 May have multiple subsystem ports: No 00:25:38.879 May have multiple controllers: No 00:25:38.879 Associated with SR-IOV VF: No 00:25:38.879 Max Data Transfer Size: Unlimited 00:25:38.879 Max Number of Namespaces: 0 00:25:38.879 Max Number of I/O Queues: 1024 00:25:38.879 NVMe Specification Version (VS): 1.3 00:25:38.879 NVMe Specification Version (Identify): 1.3 00:25:38.879 Maximum Queue Entries: 1024 00:25:38.879 Contiguous Queues Required: No 00:25:38.879 Arbitration Mechanisms Supported 00:25:38.879 Weighted Round Robin: Not Supported 00:25:38.879 Vendor Specific: Not Supported 00:25:38.879 Reset Timeout: 7500 ms 00:25:38.879 Doorbell Stride: 4 bytes 00:25:38.879 NVM Subsystem Reset: Not Supported 00:25:38.879 Command Sets Supported 00:25:38.879 NVM Command Set: Supported 00:25:38.879 Boot Partition: Not Supported 00:25:38.879 Memory Page Size Minimum: 4096 bytes 00:25:38.879 Memory Page Size Maximum: 4096 bytes 00:25:38.879 Persistent Memory Region: Not Supported 00:25:38.879 Optional Asynchronous Events Supported 00:25:38.879 Namespace Attribute Notices: Not Supported 00:25:38.879 Firmware Activation Notices: Not Supported 00:25:38.879 ANA Change Notices: Not Supported 00:25:38.879 PLE Aggregate Log Change Notices: Not Supported 00:25:38.879 LBA Status Info Alert Notices: Not Supported 00:25:38.879 EGE Aggregate Log Change Notices: Not Supported 00:25:38.879 Normal NVM Subsystem Shutdown event: Not Supported 00:25:38.879 Zone Descriptor Change Notices: Not Supported 00:25:38.879 Discovery Log Change Notices: Supported 00:25:38.879 Controller Attributes 00:25:38.879 128-bit Host Identifier: Not Supported 00:25:38.879 Non-Operational Permissive Mode: Not Supported 00:25:38.879 NVM Sets: Not Supported 00:25:38.879 Read Recovery Levels: Not Supported 00:25:38.879 Endurance Groups: Not Supported 00:25:38.879 Predictable Latency Mode: Not Supported 00:25:38.879 Traffic Based Keep ALive: Not Supported 00:25:38.879 Namespace Granularity: Not Supported 00:25:38.879 SQ Associations: Not Supported 00:25:38.879 UUID List: Not Supported 00:25:38.879 Multi-Domain Subsystem: Not Supported 00:25:38.879 Fixed Capacity Management: Not Supported 00:25:38.879 Variable Capacity Management: Not Supported 00:25:38.879 Delete Endurance Group: Not Supported 00:25:38.879 Delete NVM Set: Not Supported 00:25:38.879 Extended LBA Formats Supported: Not Supported 00:25:38.879 Flexible Data 
Placement Supported: Not Supported 00:25:38.879 00:25:38.879 Controller Memory Buffer Support 00:25:38.879 ================================ 00:25:38.879 Supported: No 00:25:38.879 00:25:38.879 Persistent Memory Region Support 00:25:38.879 ================================ 00:25:38.879 Supported: No 00:25:38.879 00:25:38.879 Admin Command Set Attributes 00:25:38.879 ============================ 00:25:38.879 Security Send/Receive: Not Supported 00:25:38.879 Format NVM: Not Supported 00:25:38.879 Firmware Activate/Download: Not Supported 00:25:38.879 Namespace Management: Not Supported 00:25:38.879 Device Self-Test: Not Supported 00:25:38.879 Directives: Not Supported 00:25:38.880 NVMe-MI: Not Supported 00:25:38.880 Virtualization Management: Not Supported 00:25:38.880 Doorbell Buffer Config: Not Supported 00:25:38.880 Get LBA Status Capability: Not Supported 00:25:38.880 Command & Feature Lockdown Capability: Not Supported 00:25:38.880 Abort Command Limit: 1 00:25:38.880 Async Event Request Limit: 1 00:25:38.880 Number of Firmware Slots: N/A 00:25:38.880 Firmware Slot 1 Read-Only: N/A 00:25:38.880 Firmware Activation Without Reset: N/A 00:25:38.880 Multiple Update Detection Support: N/A 00:25:38.880 Firmware Update Granularity: No Information Provided 00:25:38.880 Per-Namespace SMART Log: No 00:25:38.880 Asymmetric Namespace Access Log Page: Not Supported 00:25:38.880 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:38.880 Command Effects Log Page: Not Supported 00:25:38.880 Get Log Page Extended Data: Supported 00:25:38.880 Telemetry Log Pages: Not Supported 00:25:38.880 Persistent Event Log Pages: Not Supported 00:25:38.880 Supported Log Pages Log Page: May Support 00:25:38.880 Commands Supported & Effects Log Page: Not Supported 00:25:38.880 Feature Identifiers & Effects Log Page:May Support 00:25:38.880 NVMe-MI Commands & Effects Log Page: May Support 00:25:38.880 Data Area 4 for Telemetry Log: Not Supported 00:25:38.880 Error Log Page Entries Supported: 1 00:25:38.880 Keep Alive: Not Supported 00:25:38.880 00:25:38.880 NVM Command Set Attributes 00:25:38.880 ========================== 00:25:38.880 Submission Queue Entry Size 00:25:38.880 Max: 1 00:25:38.880 Min: 1 00:25:38.880 Completion Queue Entry Size 00:25:38.880 Max: 1 00:25:38.880 Min: 1 00:25:38.880 Number of Namespaces: 0 00:25:38.880 Compare Command: Not Supported 00:25:38.880 Write Uncorrectable Command: Not Supported 00:25:38.880 Dataset Management Command: Not Supported 00:25:38.880 Write Zeroes Command: Not Supported 00:25:38.880 Set Features Save Field: Not Supported 00:25:38.880 Reservations: Not Supported 00:25:38.880 Timestamp: Not Supported 00:25:38.880 Copy: Not Supported 00:25:38.880 Volatile Write Cache: Not Present 00:25:38.880 Atomic Write Unit (Normal): 1 00:25:38.880 Atomic Write Unit (PFail): 1 00:25:38.880 Atomic Compare & Write Unit: 1 00:25:38.880 Fused Compare & Write: Not Supported 00:25:38.880 Scatter-Gather List 00:25:38.880 SGL Command Set: Supported 00:25:38.880 SGL Keyed: Not Supported 00:25:38.880 SGL Bit Bucket Descriptor: Not Supported 00:25:38.880 SGL Metadata Pointer: Not Supported 00:25:38.880 Oversized SGL: Not Supported 00:25:38.880 SGL Metadata Address: Not Supported 00:25:38.880 SGL Offset: Supported 00:25:38.880 Transport SGL Data Block: Not Supported 00:25:38.880 Replay Protected Memory Block: Not Supported 00:25:38.880 00:25:38.880 Firmware Slot Information 00:25:38.880 ========================= 00:25:38.880 Active slot: 0 00:25:38.880 00:25:38.880 00:25:38.880 Error Log 
00:25:38.880 ========= 00:25:38.880 00:25:38.880 Active Namespaces 00:25:38.880 ================= 00:25:38.880 Discovery Log Page 00:25:38.880 ================== 00:25:38.880 Generation Counter: 2 00:25:38.880 Number of Records: 2 00:25:38.880 Record Format: 0 00:25:38.880 00:25:38.880 Discovery Log Entry 0 00:25:38.880 ---------------------- 00:25:38.880 Transport Type: 3 (TCP) 00:25:38.880 Address Family: 1 (IPv4) 00:25:38.880 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:38.880 Entry Flags: 00:25:38.880 Duplicate Returned Information: 0 00:25:38.880 Explicit Persistent Connection Support for Discovery: 0 00:25:38.880 Transport Requirements: 00:25:38.880 Secure Channel: Not Specified 00:25:38.880 Port ID: 1 (0x0001) 00:25:38.880 Controller ID: 65535 (0xffff) 00:25:38.880 Admin Max SQ Size: 32 00:25:38.880 Transport Service Identifier: 4420 00:25:38.880 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:38.880 Transport Address: 10.0.0.1 00:25:38.880 Discovery Log Entry 1 00:25:38.880 ---------------------- 00:25:38.880 Transport Type: 3 (TCP) 00:25:38.880 Address Family: 1 (IPv4) 00:25:38.880 Subsystem Type: 2 (NVM Subsystem) 00:25:38.880 Entry Flags: 00:25:38.880 Duplicate Returned Information: 0 00:25:38.880 Explicit Persistent Connection Support for Discovery: 0 00:25:38.880 Transport Requirements: 00:25:38.880 Secure Channel: Not Specified 00:25:38.880 Port ID: 1 (0x0001) 00:25:38.880 Controller ID: 65535 (0xffff) 00:25:38.880 Admin Max SQ Size: 32 00:25:38.880 Transport Service Identifier: 4420 00:25:38.880 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:38.880 Transport Address: 10.0.0.1 00:25:38.880 09:07:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:39.140 get_feature(0x01) failed 00:25:39.140 get_feature(0x02) failed 00:25:39.140 get_feature(0x04) failed 00:25:39.140 ===================================================== 00:25:39.140 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:39.140 ===================================================== 00:25:39.140 Controller Capabilities/Features 00:25:39.140 ================================ 00:25:39.140 Vendor ID: 0000 00:25:39.140 Subsystem Vendor ID: 0000 00:25:39.140 Serial Number: c72799c82095350bcb10 00:25:39.140 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:39.140 Firmware Version: 6.7.0-68 00:25:39.140 Recommended Arb Burst: 6 00:25:39.140 IEEE OUI Identifier: 00 00 00 00:25:39.140 Multi-path I/O 00:25:39.140 May have multiple subsystem ports: Yes 00:25:39.140 May have multiple controllers: Yes 00:25:39.140 Associated with SR-IOV VF: No 00:25:39.140 Max Data Transfer Size: Unlimited 00:25:39.140 Max Number of Namespaces: 1024 00:25:39.140 Max Number of I/O Queues: 128 00:25:39.140 NVMe Specification Version (VS): 1.3 00:25:39.140 NVMe Specification Version (Identify): 1.3 00:25:39.140 Maximum Queue Entries: 1024 00:25:39.140 Contiguous Queues Required: No 00:25:39.140 Arbitration Mechanisms Supported 00:25:39.140 Weighted Round Robin: Not Supported 00:25:39.140 Vendor Specific: Not Supported 00:25:39.140 Reset Timeout: 7500 ms 00:25:39.140 Doorbell Stride: 4 bytes 00:25:39.140 NVM Subsystem Reset: Not Supported 00:25:39.140 Command Sets Supported 00:25:39.140 NVM Command Set: Supported 00:25:39.140 Boot Partition: Not Supported 00:25:39.140 Memory 
Page Size Minimum: 4096 bytes 00:25:39.140 Memory Page Size Maximum: 4096 bytes 00:25:39.140 Persistent Memory Region: Not Supported 00:25:39.140 Optional Asynchronous Events Supported 00:25:39.140 Namespace Attribute Notices: Supported 00:25:39.140 Firmware Activation Notices: Not Supported 00:25:39.140 ANA Change Notices: Supported 00:25:39.140 PLE Aggregate Log Change Notices: Not Supported 00:25:39.140 LBA Status Info Alert Notices: Not Supported 00:25:39.140 EGE Aggregate Log Change Notices: Not Supported 00:25:39.140 Normal NVM Subsystem Shutdown event: Not Supported 00:25:39.140 Zone Descriptor Change Notices: Not Supported 00:25:39.140 Discovery Log Change Notices: Not Supported 00:25:39.140 Controller Attributes 00:25:39.140 128-bit Host Identifier: Supported 00:25:39.140 Non-Operational Permissive Mode: Not Supported 00:25:39.140 NVM Sets: Not Supported 00:25:39.140 Read Recovery Levels: Not Supported 00:25:39.140 Endurance Groups: Not Supported 00:25:39.140 Predictable Latency Mode: Not Supported 00:25:39.140 Traffic Based Keep ALive: Supported 00:25:39.140 Namespace Granularity: Not Supported 00:25:39.140 SQ Associations: Not Supported 00:25:39.140 UUID List: Not Supported 00:25:39.140 Multi-Domain Subsystem: Not Supported 00:25:39.140 Fixed Capacity Management: Not Supported 00:25:39.140 Variable Capacity Management: Not Supported 00:25:39.140 Delete Endurance Group: Not Supported 00:25:39.140 Delete NVM Set: Not Supported 00:25:39.140 Extended LBA Formats Supported: Not Supported 00:25:39.140 Flexible Data Placement Supported: Not Supported 00:25:39.140 00:25:39.140 Controller Memory Buffer Support 00:25:39.140 ================================ 00:25:39.140 Supported: No 00:25:39.140 00:25:39.140 Persistent Memory Region Support 00:25:39.140 ================================ 00:25:39.140 Supported: No 00:25:39.140 00:25:39.140 Admin Command Set Attributes 00:25:39.140 ============================ 00:25:39.140 Security Send/Receive: Not Supported 00:25:39.140 Format NVM: Not Supported 00:25:39.140 Firmware Activate/Download: Not Supported 00:25:39.140 Namespace Management: Not Supported 00:25:39.140 Device Self-Test: Not Supported 00:25:39.140 Directives: Not Supported 00:25:39.140 NVMe-MI: Not Supported 00:25:39.140 Virtualization Management: Not Supported 00:25:39.140 Doorbell Buffer Config: Not Supported 00:25:39.140 Get LBA Status Capability: Not Supported 00:25:39.140 Command & Feature Lockdown Capability: Not Supported 00:25:39.140 Abort Command Limit: 4 00:25:39.140 Async Event Request Limit: 4 00:25:39.140 Number of Firmware Slots: N/A 00:25:39.140 Firmware Slot 1 Read-Only: N/A 00:25:39.140 Firmware Activation Without Reset: N/A 00:25:39.140 Multiple Update Detection Support: N/A 00:25:39.140 Firmware Update Granularity: No Information Provided 00:25:39.140 Per-Namespace SMART Log: Yes 00:25:39.140 Asymmetric Namespace Access Log Page: Supported 00:25:39.140 ANA Transition Time : 10 sec 00:25:39.140 00:25:39.140 Asymmetric Namespace Access Capabilities 00:25:39.140 ANA Optimized State : Supported 00:25:39.140 ANA Non-Optimized State : Supported 00:25:39.140 ANA Inaccessible State : Supported 00:25:39.140 ANA Persistent Loss State : Supported 00:25:39.140 ANA Change State : Supported 00:25:39.140 ANAGRPID is not changed : No 00:25:39.140 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:39.140 00:25:39.140 ANA Group Identifier Maximum : 128 00:25:39.140 Number of ANA Group Identifiers : 128 00:25:39.140 Max Number of Allowed Namespaces : 1024 00:25:39.140 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:25:39.140 Command Effects Log Page: Supported 00:25:39.140 Get Log Page Extended Data: Supported 00:25:39.140 Telemetry Log Pages: Not Supported 00:25:39.140 Persistent Event Log Pages: Not Supported 00:25:39.140 Supported Log Pages Log Page: May Support 00:25:39.140 Commands Supported & Effects Log Page: Not Supported 00:25:39.140 Feature Identifiers & Effects Log Page:May Support 00:25:39.140 NVMe-MI Commands & Effects Log Page: May Support 00:25:39.140 Data Area 4 for Telemetry Log: Not Supported 00:25:39.140 Error Log Page Entries Supported: 128 00:25:39.140 Keep Alive: Supported 00:25:39.140 Keep Alive Granularity: 1000 ms 00:25:39.140 00:25:39.140 NVM Command Set Attributes 00:25:39.140 ========================== 00:25:39.140 Submission Queue Entry Size 00:25:39.140 Max: 64 00:25:39.140 Min: 64 00:25:39.140 Completion Queue Entry Size 00:25:39.140 Max: 16 00:25:39.140 Min: 16 00:25:39.140 Number of Namespaces: 1024 00:25:39.140 Compare Command: Not Supported 00:25:39.140 Write Uncorrectable Command: Not Supported 00:25:39.140 Dataset Management Command: Supported 00:25:39.140 Write Zeroes Command: Supported 00:25:39.140 Set Features Save Field: Not Supported 00:25:39.140 Reservations: Not Supported 00:25:39.140 Timestamp: Not Supported 00:25:39.140 Copy: Not Supported 00:25:39.140 Volatile Write Cache: Present 00:25:39.140 Atomic Write Unit (Normal): 1 00:25:39.140 Atomic Write Unit (PFail): 1 00:25:39.140 Atomic Compare & Write Unit: 1 00:25:39.140 Fused Compare & Write: Not Supported 00:25:39.140 Scatter-Gather List 00:25:39.140 SGL Command Set: Supported 00:25:39.140 SGL Keyed: Not Supported 00:25:39.140 SGL Bit Bucket Descriptor: Not Supported 00:25:39.140 SGL Metadata Pointer: Not Supported 00:25:39.140 Oversized SGL: Not Supported 00:25:39.140 SGL Metadata Address: Not Supported 00:25:39.140 SGL Offset: Supported 00:25:39.140 Transport SGL Data Block: Not Supported 00:25:39.141 Replay Protected Memory Block: Not Supported 00:25:39.141 00:25:39.141 Firmware Slot Information 00:25:39.141 ========================= 00:25:39.141 Active slot: 0 00:25:39.141 00:25:39.141 Asymmetric Namespace Access 00:25:39.141 =========================== 00:25:39.141 Change Count : 0 00:25:39.141 Number of ANA Group Descriptors : 1 00:25:39.141 ANA Group Descriptor : 0 00:25:39.141 ANA Group ID : 1 00:25:39.141 Number of NSID Values : 1 00:25:39.141 Change Count : 0 00:25:39.141 ANA State : 1 00:25:39.141 Namespace Identifier : 1 00:25:39.141 00:25:39.141 Commands Supported and Effects 00:25:39.141 ============================== 00:25:39.141 Admin Commands 00:25:39.141 -------------- 00:25:39.141 Get Log Page (02h): Supported 00:25:39.141 Identify (06h): Supported 00:25:39.141 Abort (08h): Supported 00:25:39.141 Set Features (09h): Supported 00:25:39.141 Get Features (0Ah): Supported 00:25:39.141 Asynchronous Event Request (0Ch): Supported 00:25:39.141 Keep Alive (18h): Supported 00:25:39.141 I/O Commands 00:25:39.141 ------------ 00:25:39.141 Flush (00h): Supported 00:25:39.141 Write (01h): Supported LBA-Change 00:25:39.141 Read (02h): Supported 00:25:39.141 Write Zeroes (08h): Supported LBA-Change 00:25:39.141 Dataset Management (09h): Supported 00:25:39.141 00:25:39.141 Error Log 00:25:39.141 ========= 00:25:39.141 Entry: 0 00:25:39.141 Error Count: 0x3 00:25:39.141 Submission Queue Id: 0x0 00:25:39.141 Command Id: 0x5 00:25:39.141 Phase Bit: 0 00:25:39.141 Status Code: 0x2 00:25:39.141 Status Code Type: 0x0 00:25:39.141 Do Not Retry: 1 00:25:39.141 Error 
Location: 0x28 00:25:39.141 LBA: 0x0 00:25:39.141 Namespace: 0x0 00:25:39.141 Vendor Log Page: 0x0 00:25:39.141 ----------- 00:25:39.141 Entry: 1 00:25:39.141 Error Count: 0x2 00:25:39.141 Submission Queue Id: 0x0 00:25:39.141 Command Id: 0x5 00:25:39.141 Phase Bit: 0 00:25:39.141 Status Code: 0x2 00:25:39.141 Status Code Type: 0x0 00:25:39.141 Do Not Retry: 1 00:25:39.141 Error Location: 0x28 00:25:39.141 LBA: 0x0 00:25:39.141 Namespace: 0x0 00:25:39.141 Vendor Log Page: 0x0 00:25:39.141 ----------- 00:25:39.141 Entry: 2 00:25:39.141 Error Count: 0x1 00:25:39.141 Submission Queue Id: 0x0 00:25:39.141 Command Id: 0x4 00:25:39.141 Phase Bit: 0 00:25:39.141 Status Code: 0x2 00:25:39.141 Status Code Type: 0x0 00:25:39.141 Do Not Retry: 1 00:25:39.141 Error Location: 0x28 00:25:39.141 LBA: 0x0 00:25:39.141 Namespace: 0x0 00:25:39.141 Vendor Log Page: 0x0 00:25:39.141 00:25:39.141 Number of Queues 00:25:39.141 ================ 00:25:39.141 Number of I/O Submission Queues: 128 00:25:39.141 Number of I/O Completion Queues: 128 00:25:39.141 00:25:39.141 ZNS Specific Controller Data 00:25:39.141 ============================ 00:25:39.141 Zone Append Size Limit: 0 00:25:39.141 00:25:39.141 00:25:39.141 Active Namespaces 00:25:39.141 ================= 00:25:39.141 get_feature(0x05) failed 00:25:39.141 Namespace ID:1 00:25:39.141 Command Set Identifier: NVM (00h) 00:25:39.141 Deallocate: Supported 00:25:39.141 Deallocated/Unwritten Error: Not Supported 00:25:39.141 Deallocated Read Value: Unknown 00:25:39.141 Deallocate in Write Zeroes: Not Supported 00:25:39.141 Deallocated Guard Field: 0xFFFF 00:25:39.141 Flush: Supported 00:25:39.141 Reservation: Not Supported 00:25:39.141 Namespace Sharing Capabilities: Multiple Controllers 00:25:39.141 Size (in LBAs): 1310720 (5GiB) 00:25:39.141 Capacity (in LBAs): 1310720 (5GiB) 00:25:39.141 Utilization (in LBAs): 1310720 (5GiB) 00:25:39.141 UUID: c81b1c2c-74d3-49f2-9bbf-9ce8c6d04b95 00:25:39.141 Thin Provisioning: Not Supported 00:25:39.141 Per-NS Atomic Units: Yes 00:25:39.141 Atomic Boundary Size (Normal): 0 00:25:39.141 Atomic Boundary Size (PFail): 0 00:25:39.141 Atomic Boundary Offset: 0 00:25:39.141 NGUID/EUI64 Never Reused: No 00:25:39.141 ANA group ID: 1 00:25:39.141 Namespace Write Protected: No 00:25:39.141 Number of LBA Formats: 1 00:25:39.141 Current LBA Format: LBA Format #00 00:25:39.141 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:25:39.141 00:25:39.141 09:07:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:39.141 09:07:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:39.141 09:07:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:25:39.141 09:07:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:39.141 09:07:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:25:39.141 09:07:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:39.141 09:07:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:39.141 rmmod nvme_tcp 00:25:39.141 rmmod nvme_fabrics 00:25:39.400 09:07:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:39.400 09:07:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:25:39.400 09:07:46 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:25:39.400 09:07:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:25:39.400 09:07:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:39.400 09:07:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:39.400 09:07:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:39.400 09:07:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:39.400 09:07:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:39.400 09:07:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:39.400 09:07:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:39.400 09:07:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:39.400 09:07:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:39.400 09:07:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:39.400 09:07:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:39.400 09:07:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:25:39.400 09:07:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:39.400 09:07:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:39.400 09:07:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:39.400 09:07:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:39.400 09:07:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:39.400 09:07:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:25:39.400 09:07:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:39.968 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:39.968 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:25:40.226 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:25:40.226 ************************************ 00:25:40.226 END TEST nvmf_identify_kernel_target 00:25:40.226 ************************************ 00:25:40.226 00:25:40.226 real 0m2.953s 00:25:40.226 user 0m1.056s 00:25:40.226 sys 0m1.422s 00:25:40.226 09:07:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:40.226 09:07:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:40.226 09:07:47 nvmf_tcp.nvmf_host -- 
nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:40.226 09:07:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:40.226 09:07:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:40.226 09:07:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.226 ************************************ 00:25:40.226 START TEST nvmf_auth_host 00:25:40.226 ************************************ 00:25:40.226 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:40.226 * Looking for test storage... 00:25:40.226 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:40.226 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:40.226 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:40.226 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:40.226 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:40.226 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:40.226 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:40.226 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:40.485 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:40.485 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:40.485 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:40.486 09:07:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:40.486 Cannot find device "nvmf_tgt_br" 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:40.486 Cannot find device "nvmf_tgt_br2" 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:40.486 Cannot find device "nvmf_tgt_br" 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:40.486 Cannot find device "nvmf_tgt_br2" 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:40.486 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:40.487 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:40.487 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:40.487 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:25:40.487 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:40.487 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:40.487 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:25:40.487 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:40.487 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:40.487 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:40.487 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:40.487 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:40.487 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:40.487 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:40.487 09:07:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:40.487 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:40.487 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:40.746 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:40.746 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:40.746 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:40.746 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:40.746 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:40.746 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:40.746 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:40.746 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:40.746 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:40.746 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:40.746 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:40.746 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:40.746 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:40.746 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:40.746 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:40.746 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:25:40.746 00:25:40.746 --- 10.0.0.2 ping statistics --- 00:25:40.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.746 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:25:40.746 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:40.746 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:40.746 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:25:40.746 00:25:40.746 --- 10.0.0.3 ping statistics --- 00:25:40.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.746 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:25:40.746 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:40.746 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:40.746 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:25:40.746 00:25:40.746 --- 10.0.0.1 ping statistics --- 00:25:40.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.746 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:25:40.746 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:40.746 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:25:40.746 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:40.746 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:40.746 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:40.746 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:40.746 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:40.746 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:40.746 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:40.746 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:40.746 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:40.746 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:40.746 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.747 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=84250 00:25:40.747 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 84250 00:25:40.747 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 84250 ']' 00:25:40.747 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:40.747 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:40.747 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:40.747 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
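
At this point in the trace, nvmf_veth_init (nvmf/common.sh) has built the veth test topology: an initiator interface on the host at 10.0.0.1, two target interfaces at 10.0.0.2 and 10.0.0.3 inside the nvmf_tgt_ns_spdk namespace, all joined by the nvmf_br bridge, with TCP port 4420 opened in iptables and reachability confirmed by the pings above. The earlier "Cannot find device" / "Cannot open network namespace" messages are just the tolerated cleanup of a previous topology that did not exist. A condensed, standalone sketch of the same setup, using the names and addresses from the log (run as root; the second target pair, nvmf_tgt_if2/nvmf_tgt_br2, is created the same way as the first):

  # target namespace and veth pairs (initiator end stays in the root namespace)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # addresses: initiator side 10.0.0.1, target side 10.0.0.2 (the test adds 10.0.0.3 on the second pair)
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  # bring the links up and bridge the host-side peers together
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # open NVMe/TCP port 4420, allow bridge-local forwarding, then check reachability both ways
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

With the network in place, the SPDK application is started inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth), while later in the log a kernel nvmet subsystem is configured on the host side at 10.0.0.1, so the DH-HMAC-CHAP exchange runs across the bridge between the two.
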
00:25:40.747 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:40.747 09:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.682 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:41.682 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:25:41.682 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:41.682 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:41.682 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.941 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:41.941 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:41.941 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:41.941 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:41.941 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:41.941 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:41.941 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:41.941 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:41.941 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:41.941 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=77cebb77fed427eb66e4770739d78991 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.swo 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 77cebb77fed427eb66e4770739d78991 0 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 77cebb77fed427eb66e4770739d78991 0 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=77cebb77fed427eb66e4770739d78991 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.swo 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.swo 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.swo 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:41.942 09:07:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=35b31f10053d620ddc24f47457ad91e09a81301bdfe1e0266efe16edfa1f8a27 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.rZQ 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 35b31f10053d620ddc24f47457ad91e09a81301bdfe1e0266efe16edfa1f8a27 3 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 35b31f10053d620ddc24f47457ad91e09a81301bdfe1e0266efe16edfa1f8a27 3 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=35b31f10053d620ddc24f47457ad91e09a81301bdfe1e0266efe16edfa1f8a27 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.rZQ 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.rZQ 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.rZQ 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a09d4389ae02d021b67c160ce3fb7985c7285b6df0da6261 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.aQL 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a09d4389ae02d021b67c160ce3fb7985c7285b6df0da6261 0 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a09d4389ae02d021b67c160ce3fb7985c7285b6df0da6261 0 
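
The gen_dhchap_key calls traced here turn random bytes into DH-HMAC-CHAP secrets: xxd -p -c0 -l <len/2> /dev/urandom yields a hex string of the requested length, an inline python program wraps it as DHHC-1:<digest-id>:<base64 payload>:, and the result lands in a mode-0600 temp file whose path is stored in keys[] or ckeys[]. The two-digit digest id mirrors the digests array above (00 = null, 02 = sha384, 03 = sha512, and so on). Below is a rough, hedged sketch of that formatting step, not SPDK's exact format_dhchap_key code; the four trailing bytes are assumed here to be a little-endian CRC-32 of the secret, which is consistent with the DHHC-1 key strings printed further down in the log:

  # 48-character secret, as for keys[1] in the trace (null digest, id 00)
  key_hex=$(xxd -p -c0 -l 24 /dev/urandom)
  keyfile=$(mktemp -t spdk.key-null.XXX)

  # DHHC-1 payload: base64(ASCII secret || crc32(secret)); the CRC trailer is an assumption
  python3 -c 'import base64,sys,zlib; s=sys.argv[1].encode(); print("DHHC-1:00:"+base64.b64encode(s+zlib.crc32(s).to_bytes(4,"little")).decode()+":")' "$key_hex" > "$keyfile"

  chmod 0600 "$keyfile"
  echo "$keyfile"

The keys echoed later in the trace (for example DHHC-1:00:YTA5ZDQz...==:) match this layout: the base64 payload decodes to the ASCII hex secret plus four extra bytes.
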
00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a09d4389ae02d021b67c160ce3fb7985c7285b6df0da6261 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.aQL 00:25:41.942 09:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.aQL 00:25:41.942 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.aQL 00:25:41.942 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:41.942 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:41.942 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:41.942 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:41.942 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:41.942 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:41.942 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:41.942 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2d94eebc2b33af651c5d9779a7b4b462329e566afcef21c4 00:25:41.942 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:41.942 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.1mA 00:25:41.942 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2d94eebc2b33af651c5d9779a7b4b462329e566afcef21c4 2 00:25:41.942 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2d94eebc2b33af651c5d9779a7b4b462329e566afcef21c4 2 00:25:41.942 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:41.942 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:41.942 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2d94eebc2b33af651c5d9779a7b4b462329e566afcef21c4 00:25:41.942 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:41.942 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:42.201 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.1mA 00:25:42.201 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.1mA 00:25:42.201 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.1mA 00:25:42.201 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:42.201 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:42.201 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:42.201 09:07:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:42.201 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:42.201 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:42.201 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:42.201 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d8d81ac19c239387982cda89508cb3bf 00:25:42.201 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:42.201 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.BVS 00:25:42.201 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d8d81ac19c239387982cda89508cb3bf 1 00:25:42.201 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d8d81ac19c239387982cda89508cb3bf 1 00:25:42.201 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:42.201 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:42.201 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d8d81ac19c239387982cda89508cb3bf 00:25:42.201 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:42.201 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:42.201 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.BVS 00:25:42.201 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.BVS 00:25:42.201 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.BVS 00:25:42.201 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:42.201 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:42.201 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b53bbeef153964d25ba50472df50eda6 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Kbo 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b53bbeef153964d25ba50472df50eda6 1 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b53bbeef153964d25ba50472df50eda6 1 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=b53bbeef153964d25ba50472df50eda6 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Kbo 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Kbo 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Kbo 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=dbcc30d27eec7ec7b022cbafc08d7bbefae3db174586c79f 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.vnm 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key dbcc30d27eec7ec7b022cbafc08d7bbefae3db174586c79f 2 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 dbcc30d27eec7ec7b022cbafc08d7bbefae3db174586c79f 2 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=dbcc30d27eec7ec7b022cbafc08d7bbefae3db174586c79f 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.vnm 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.vnm 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.vnm 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:42.202 09:07:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a8edf27720ae956ea891a5deebb5394b 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.jEX 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a8edf27720ae956ea891a5deebb5394b 0 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a8edf27720ae956ea891a5deebb5394b 0 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a8edf27720ae956ea891a5deebb5394b 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:42.202 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:42.460 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.jEX 00:25:42.460 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.jEX 00:25:42.461 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.jEX 00:25:42.461 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:42.461 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:42.461 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:42.461 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:42.461 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:42.461 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:42.461 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:42.461 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4811393e6ca21a01f6cbe730907f628c43381a04af1de48330e8820910295d22 00:25:42.461 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:42.461 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.i4s 00:25:42.461 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4811393e6ca21a01f6cbe730907f628c43381a04af1de48330e8820910295d22 3 00:25:42.461 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4811393e6ca21a01f6cbe730907f628c43381a04af1de48330e8820910295d22 3 00:25:42.461 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:42.461 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:42.461 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4811393e6ca21a01f6cbe730907f628c43381a04af1de48330e8820910295d22 00:25:42.461 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:42.461 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:25:42.461 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.i4s 00:25:42.461 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.i4s 00:25:42.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:42.461 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.i4s 00:25:42.461 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:42.461 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 84250 00:25:42.461 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 84250 ']' 00:25:42.461 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:42.461 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:42.461 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:42.461 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:42.461 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.720 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:42.720 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:25:42.720 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:42.720 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.swo 00:25:42.720 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.720 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.720 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.720 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.rZQ ]] 00:25:42.720 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.rZQ 00:25:42.720 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.720 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.720 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.720 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:42.720 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.aQL 00:25:42.720 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.720 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.720 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.720 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.1mA ]] 00:25:42.720 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.1mA 00:25:42.720 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.720 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.720 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.720 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:42.720 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.BVS 00:25:42.720 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.720 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.720 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.720 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Kbo ]] 00:25:42.720 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Kbo 00:25:42.720 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.720 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.720 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.720 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:42.720 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.vnm 00:25:42.720 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.720 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.720 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.720 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.jEX ]] 00:25:42.720 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.jEX 00:25:42.720 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.720 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.720 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.720 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:42.720 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.i4s 00:25:42.720 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.720 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.721 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.721 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:42.721 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:42.721 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:42.721 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:42.721 09:07:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:42.721 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:42.721 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.721 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.721 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:42.721 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.721 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:42.721 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:42.721 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:42.721 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:42.721 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:42.721 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:42.721 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:42.721 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:42.721 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:42.721 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:25:42.721 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:42.721 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:42.721 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:42.721 09:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:42.980 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:42.980 Waiting for block devices as requested 00:25:43.239 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:43.239 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:43.807 09:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:43.807 09:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:43.807 09:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:43.807 09:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:25:43.807 09:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:43.807 09:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:43.807 09:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:43.807 09:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:43.807 09:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:25:43.807 No valid GPT data, bailing 00:25:43.807 09:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:43.807 09:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:25:43.807 09:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:25:43.807 09:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:43.807 09:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:43.807 09:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:25:43.807 09:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:25:43.807 09:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:25:43.807 09:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:25:43.807 09:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:43.807 09:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:25:43.807 09:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:25:43.807 09:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:25:44.067 No valid GPT data, bailing 00:25:44.067 09:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:25:44.067 09:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:25:44.067 09:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@392 -- # return 1 00:25:44.067 09:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:25:44.067 09:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:44.067 09:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:25:44.067 09:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:25:44.067 09:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:25:44.067 09:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:25:44.067 09:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:44.067 09:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:25:44.067 09:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:25:44.067 09:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:25:44.067 No valid GPT data, bailing 00:25:44.067 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:25:44.067 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:25:44.067 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:25:44.067 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:25:44.067 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:44.067 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:25:44.067 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:25:44.067 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:25:44.067 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:25:44.067 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:44.067 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:25:44.067 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:25:44.067 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:25:44.067 No valid GPT data, bailing 00:25:44.067 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:25:44.067 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:25:44.067 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:25:44.067 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:25:44.067 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:25:44.067 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:44.067 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:44.067 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:44.067 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:44.067 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:25:44.067 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:25:44.067 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:25:44.067 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:25:44.067 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:25:44.067 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:25:44.067 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:25:44.067 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:44.067 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid=a4705431-95c9-4bc1-9185-4a8233d2d7f5 -a 10.0.0.1 -t tcp -s 4420 00:25:44.067 00:25:44.067 Discovery Log Number of Records 2, Generation counter 2 00:25:44.067 =====Discovery Log Entry 0====== 00:25:44.067 trtype: tcp 00:25:44.067 adrfam: ipv4 00:25:44.067 subtype: current discovery subsystem 00:25:44.067 treq: not specified, sq flow control disable supported 00:25:44.067 portid: 1 00:25:44.067 trsvcid: 4420 00:25:44.067 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:44.067 traddr: 10.0.0.1 00:25:44.067 eflags: none 00:25:44.067 sectype: none 00:25:44.067 =====Discovery Log Entry 1====== 00:25:44.067 trtype: tcp 00:25:44.067 adrfam: ipv4 00:25:44.067 subtype: nvme subsystem 00:25:44.067 treq: not specified, sq flow control disable supported 00:25:44.067 portid: 1 00:25:44.067 trsvcid: 4420 00:25:44.067 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:44.067 traddr: 10.0.0.1 00:25:44.067 eflags: none 00:25:44.067 sectype: none 00:25:44.067 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:44.067 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:44.067 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:44.067 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:44.067 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.067 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:44.067 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:44.067 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:44.067 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTA5ZDQzODlhZTAyZDAyMWI2N2MxNjBjZTNmYjc5ODVjNzI4NWI2ZGYwZGE2MjYxmzzaRg==: 00:25:44.067 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: 00:25:44.067 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:44.067 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:44.327 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTA5ZDQzODlhZTAyZDAyMWI2N2MxNjBjZTNmYjc5ODVjNzI4NWI2ZGYwZGE2MjYxmzzaRg==: 00:25:44.327 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: ]] 00:25:44.327 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: 00:25:44.327 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:44.327 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:44.327 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:44.327 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:44.327 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:44.327 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.327 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:44.327 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:44.327 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:44.327 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.327 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:44.327 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.327 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.327 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.327 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.327 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:44.327 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:44.327 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:44.327 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.327 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.327 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:44.327 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.327 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:44.327 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 
10.0.0.1 ]] 00:25:44.327 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:44.327 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:44.327 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.327 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.327 nvme0n1 00:25:44.327 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.327 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.327 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.327 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.327 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.327 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.327 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.327 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.327 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.327 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzdjZWJiNzdmZWQ0MjdlYjY2ZTQ3NzA3MzlkNzg5OTHjrpjW: 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzViMzFmMTAwNTNkNjIwZGRjMjRmNDc0NTdhZDkxZTA5YTgxMzAxYmRmZTFlMDI2NmVmZTE2ZWRmYTFmOGEyN0me1cQ=: 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzdjZWJiNzdmZWQ0MjdlYjY2ZTQ3NzA3MzlkNzg5OTHjrpjW: 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzViMzFmMTAwNTNkNjIwZGRjMjRmNDc0NTdhZDkxZTA5YTgxMzAxYmRmZTFlMDI2NmVmZTE2ZWRmYTFmOGEyN0me1cQ=: ]] 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MzViMzFmMTAwNTNkNjIwZGRjMjRmNDc0NTdhZDkxZTA5YTgxMzAxYmRmZTFlMDI2NmVmZTE2ZWRmYTFmOGEyN0me1cQ=: 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.587 nvme0n1 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.587 
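Note on the target-side setup traced earlier in this section (nvmf/common.sh@660-680): the test builds a kernel nvmet soft target through configfs and confirms it with nvme discover. Bash xtrace does not show redirection targets, so the following is a plausible reconstruction only; the configfs attribute names (attr_model, attr_allow_any_host, device_path, enable, addr_*) are assumed standard nvmet names, while the echoed values, NQNs and the discover command are taken verbatim from the trace. The subsystem and namespace directories themselves were created just before this excerpt.

# Reconstruction (assumed attribute targets) of the nvmet soft-target setup seen in the trace.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0      # created earlier in the run
port=$nvmet/ports/1

mkdir "$port"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"
echo 1 > "$subsys/attr_allow_any_host"                   # toggled off again once allowed_hosts is populated
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"   # backing block device from the trace
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"

# Sanity check from the initiator side, exactly as nvmf/common.sh@680 runs it:
nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 \
    --hostid=a4705431-95c9-4bc1-9185-4a8233d2d7f5 -a 10.0.0.1 -t tcp -s 4420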
09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTA5ZDQzODlhZTAyZDAyMWI2N2MxNjBjZTNmYjc5ODVjNzI4NWI2ZGYwZGE2MjYxmzzaRg==: 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:44.587 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:44.588 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTA5ZDQzODlhZTAyZDAyMWI2N2MxNjBjZTNmYjc5ODVjNzI4NWI2ZGYwZGE2MjYxmzzaRg==: 00:25:44.588 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: ]] 00:25:44.588 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: 00:25:44.588 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:44.588 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.588 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:44.588 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:44.588 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:44.588 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.588 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:44.588 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.588 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.588 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.588 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.588 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:44.588 09:07:51 
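The nvmet_auth_set_key calls (host/auth.sh@42-51) that repeat throughout this trace push one of the generated DHHC-1 secrets into the kernel target for host nqn.2024-02.io.spdk:host0. The redirection targets are again hidden by xtrace; the sketch below assumes the kernel's per-host dhchap_hash / dhchap_dhgroup / dhchap_key / dhchap_ctrl_key configfs attributes, with the digest, DH group and secrets copied from the keyid 1 pass shown just above.

# Hypothetical reconstruction of nvmet_auth_set_key <digest> <dhgroup> <keyid> for keyid 1.
host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
digest=sha256 dhgroup=ffdhe2048
key='DHHC-1:00:YTA5ZDQzODlhZTAyZDAyMWI2N2MxNjBjZTNmYjc5ODVjNzI4NWI2ZGYwZGE2MjYxmzzaRg==:'
ckey='DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==:'

echo "hmac($digest)" > "$host_dir/dhchap_hash"      # the 'hmac(sha256)' echo in the trace
echo "$dhgroup"      > "$host_dir/dhchap_dhgroup"
echo "$key"          > "$host_dir/dhchap_key"       # host secret for this keyid
[[ -n $ckey ]] && echo "$ckey" > "$host_dir/dhchap_ctrl_key"   # controller secret, only when a ckey exists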
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:44.588 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:44.588 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.588 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.588 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:44.588 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.588 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:44.588 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:44.588 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:44.588 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:44.588 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.588 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.847 nvme0n1 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDhkODFhYzE5YzIzOTM4Nzk4MmNkYTg5NTA4Y2IzYmbgcYOH: 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjUzYmJlZWYxNTM5NjRkMjViYTUwNDcyZGY1MGVkYTb6RiE+: 00:25:44.847 09:07:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDhkODFhYzE5YzIzOTM4Nzk4MmNkYTg5NTA4Y2IzYmbgcYOH: 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjUzYmJlZWYxNTM5NjRkMjViYTUwNDcyZGY1MGVkYTb6RiE+: ]] 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjUzYmJlZWYxNTM5NjRkMjViYTUwNDcyZGY1MGVkYTb6RiE+: 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.847 nvme0n1 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
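Each connect_authenticate pass (host/auth.sh@55-65) drives the SPDK initiator over RPC: restrict the allowed DH-HMAC-CHAP digests and DH groups, attach to the kernel target using the keyring entries for this keyid, confirm the controller came up, then detach. The trace goes through the framework's rpc_cmd wrapper; a roughly equivalent sequence with scripts/rpc.py, assuming key0..key4 and ckey0..ckey3 were registered in the SPDK keyring earlier in the run, is:

# Equivalent of the connect_authenticate sha256 ffdhe2048 2 pass above, via scripts/rpc.py.
./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2      # keyring names set up earlier in the test

# Verify the authenticated controller exists, then tear it down for the next iteration:
name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == "nvme0" ]]
./scripts/rpc.py bdev_nvme_detach_controller nvme0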
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.847 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.107 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.107 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.107 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.107 09:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.107 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.107 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.107 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:45.107 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.107 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:45.107 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:45.107 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:45.107 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGJjYzMwZDI3ZWVjN2VjN2IwMjJjYmFmYzA4ZDdiYmVmYWUzZGIxNzQ1ODZjNzlmEwFWNg==: 00:25:45.107 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThlZGYyNzcyMGFlOTU2ZWE4OTFhNWRlZWJiNTM5NGIi5RZF: 00:25:45.107 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:45.107 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:45.107 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGJjYzMwZDI3ZWVjN2VjN2IwMjJjYmFmYzA4ZDdiYmVmYWUzZGIxNzQ1ODZjNzlmEwFWNg==: 00:25:45.107 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThlZGYyNzcyMGFlOTU2ZWE4OTFhNWRlZWJiNTM5NGIi5RZF: ]] 00:25:45.107 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThlZGYyNzcyMGFlOTU2ZWE4OTFhNWRlZWJiNTM5NGIi5RZF: 00:25:45.107 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:45.107 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.107 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:45.107 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:45.107 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:45.107 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.107 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:45.107 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.107 09:07:52 
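The recurring common/autotest_common.sh@561/@589 fragments around every rpc_cmd call are the framework suppressing xtrace while the RPC runs and then asserting it exited successfully; the '[[ 0 == 0 ]]' lines are that status check with the exit code already expanded. A minimal sketch of such a wrapper, not the framework's actual implementation, would be:

# Minimal rpc_cmd-style wrapper: quiet tracing, run rpc.py (path assumed), assert success.
rpc_cmd() {
    local rc
    set +x                      # xtrace_disable equivalent
    ./scripts/rpc.py "$@"
    rc=$?
    set -x
    [[ $rc == 0 ]]              # shows up as '[[ 0 == 0 ]]' in the trace when the RPC succeeds
}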
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.107 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.107 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.108 nvme0n1 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:45.108 
09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDgxMTM5M2U2Y2EyMWEwMWY2Y2JlNzMwOTA3ZjYyOGM0MzM4MWEwNGFmMWRlNDgzMzBlODgyMDkxMDI5NWQyMtYn6Y4=: 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDgxMTM5M2U2Y2EyMWEwMWY2Y2JlNzMwOTA3ZjYyOGM0MzM4MWEwNGFmMWRlNDgzMzBlODgyMDkxMDI5NWQyMtYn6Y4=: 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.108 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
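For keyid 4 the trace shows ckey= set to an empty string and the subsequent [[ -z '' ]] taking the true branch, so bdev_nvme_attach_controller is issued with --dhchap-key key4 and no --dhchap-ctrlr-key: authentication is unidirectional, the host proves itself to the target but does not challenge the controller. The host/auth.sh@58 line uses bash ${var:+word} expansion to drop the flag pair whenever the controller key is absent; in isolation (with illustrative secret values) the idiom is:

# How the optional --dhchap-ctrlr-key pair is built; keyid 4 deliberately has no controller secret.
keyid=4
ckeys=("c0-secret" "c1-secret" "c2-secret" "c3-secret" "")
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})   # empty array when ckeys[4] is empty

# "${ckey[@]}" expands to nothing, so the attach command simply omits the controller key:
echo bdev_nvme_attach_controller --dhchap-key "key${keyid}" "${ckey[@]}"
# prints: bdev_nvme_attach_controller --dhchap-key key4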
00:25:45.368 nvme0n1 00:25:45.368 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.368 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.368 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.368 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.368 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.368 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.368 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.368 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.368 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.368 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.368 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.368 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:45.368 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.368 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:45.368 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.368 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:45.368 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:45.368 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:45.368 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzdjZWJiNzdmZWQ0MjdlYjY2ZTQ3NzA3MzlkNzg5OTHjrpjW: 00:25:45.368 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzViMzFmMTAwNTNkNjIwZGRjMjRmNDc0NTdhZDkxZTA5YTgxMzAxYmRmZTFlMDI2NmVmZTE2ZWRmYTFmOGEyN0me1cQ=: 00:25:45.368 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:45.368 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:45.627 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzdjZWJiNzdmZWQ0MjdlYjY2ZTQ3NzA3MzlkNzg5OTHjrpjW: 00:25:45.627 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzViMzFmMTAwNTNkNjIwZGRjMjRmNDc0NTdhZDkxZTA5YTgxMzAxYmRmZTFlMDI2NmVmZTE2ZWRmYTFmOGEyN0me1cQ=: ]] 00:25:45.628 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzViMzFmMTAwNTNkNjIwZGRjMjRmNDc0NTdhZDkxZTA5YTgxMzAxYmRmZTFlMDI2NmVmZTE2ZWRmYTFmOGEyN0me1cQ=: 00:25:45.628 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:45.628 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.628 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:45.628 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:45.628 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:45.628 09:07:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.628 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:45.628 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.628 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.628 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.628 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.628 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:45.628 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:45.628 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:45.628 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.628 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.628 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:45.628 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.628 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:45.628 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:45.628 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:45.628 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:45.628 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.628 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.887 nvme0n1 00:25:45.887 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.887 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.887 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.887 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.887 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.887 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.887 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.887 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.887 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.887 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.887 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.887 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.887 09:07:52 
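The host/auth.sh@100-104 markers show the shape of the sweep this whole section is executing: for every digest, every DH group and every key index, provision the target-side secret and run one connect_authenticate pass. In outline (a paraphrase of the loop structure visible in the trace, relying on the script's own keys array and helper functions, not a copy of host/auth.sh):

# Outline of the sweep driven by host/auth.sh@100-104.
digests=(sha256 sha384 sha512)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
# keys[0..4] / ckeys[0..3] hold the DHHC-1 secrets generated earlier in the run.

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # target side (configfs)
            connect_authenticate "$digest" "$dhgroup" "$keyid"   # initiator side (SPDK RPCs)
        done
    done
done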
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:45.887 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.887 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:45.887 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:45.887 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:45.887 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTA5ZDQzODlhZTAyZDAyMWI2N2MxNjBjZTNmYjc5ODVjNzI4NWI2ZGYwZGE2MjYxmzzaRg==: 00:25:45.887 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: 00:25:45.887 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:45.887 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:45.887 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTA5ZDQzODlhZTAyZDAyMWI2N2MxNjBjZTNmYjc5ODVjNzI4NWI2ZGYwZGE2MjYxmzzaRg==: 00:25:45.887 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: ]] 00:25:45.887 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: 00:25:45.887 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:45.887 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.887 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:45.887 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:45.887 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:45.888 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.888 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:45.888 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.888 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.888 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.888 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.888 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:45.888 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:45.888 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:45.888 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.888 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.888 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:45.888 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.888 09:07:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:45.888 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:45.888 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:45.888 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:45.888 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.888 09:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.147 nvme0n1 00:25:46.147 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.147 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.147 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.147 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.147 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.147 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.147 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.147 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.147 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.147 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.147 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.147 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.147 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:46.147 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.147 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:46.147 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:46.147 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:46.147 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDhkODFhYzE5YzIzOTM4Nzk4MmNkYTg5NTA4Y2IzYmbgcYOH: 00:25:46.147 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjUzYmJlZWYxNTM5NjRkMjViYTUwNDcyZGY1MGVkYTb6RiE+: 00:25:46.147 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:46.147 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:46.147 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDhkODFhYzE5YzIzOTM4Nzk4MmNkYTg5NTA4Y2IzYmbgcYOH: 00:25:46.147 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjUzYmJlZWYxNTM5NjRkMjViYTUwNDcyZGY1MGVkYTb6RiE+: ]] 00:25:46.147 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjUzYmJlZWYxNTM5NjRkMjViYTUwNDcyZGY1MGVkYTb6RiE+: 00:25:46.147 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
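The repeated nvmf/common.sh@741-755 fragments are get_main_ns_ip picking the address the initiator should dial: an associative array maps each transport to the name of the environment variable holding the right IP, and for tcp that resolves to NVMF_INITIATOR_IP, i.e. 10.0.0.1 on this host. A condensed sketch reconstructed from the trace (the TEST_TRANSPORT variable name is an assumption; the array entries and the printed address are as shown above):

# Condensed reconstruction of get_main_ns_ip as it appears in the trace.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP        # RDMA runs dial the first target IP
        [tcp]=NVMF_INITIATOR_IP            # TCP runs (this job) dial 10.0.0.1
    )
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the variable that holds the address
    [[ -z ${!ip} ]] && return 1            # the '[[ -z 10.0.0.1 ]]' check in the trace
    echo "${!ip}"                          # prints 10.0.0.1 here
}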
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:46.147 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.147 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:46.147 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:46.147 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:46.147 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.147 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:46.147 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.147 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.147 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.147 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.147 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:46.147 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:46.147 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:46.147 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.147 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.147 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:46.147 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.147 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:46.147 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:46.148 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:46.148 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:46.148 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.148 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.148 nvme0n1 00:25:46.148 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.148 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.148 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.148 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.148 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.148 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGJjYzMwZDI3ZWVjN2VjN2IwMjJjYmFmYzA4ZDdiYmVmYWUzZGIxNzQ1ODZjNzlmEwFWNg==: 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThlZGYyNzcyMGFlOTU2ZWE4OTFhNWRlZWJiNTM5NGIi5RZF: 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGJjYzMwZDI3ZWVjN2VjN2IwMjJjYmFmYzA4ZDdiYmVmYWUzZGIxNzQ1ODZjNzlmEwFWNg==: 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThlZGYyNzcyMGFlOTU2ZWE4OTFhNWRlZWJiNTM5NGIi5RZF: ]] 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThlZGYyNzcyMGFlOTU2ZWE4OTFhNWRlZWJiNTM5NGIi5RZF: 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.415 nvme0n1 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDgxMTM5M2U2Y2EyMWEwMWY2Y2JlNzMwOTA3ZjYyOGM0MzM4MWEwNGFmMWRlNDgzMzBlODgyMDkxMDI5NWQyMtYn6Y4=: 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NDgxMTM5M2U2Y2EyMWEwMWY2Y2JlNzMwOTA3ZjYyOGM0MzM4MWEwNGFmMWRlNDgzMzBlODgyMDkxMDI5NWQyMtYn6Y4=: 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:46.415 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:46.719 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:46.719 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.719 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.719 nvme0n1 00:25:46.719 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.719 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.719 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.719 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.719 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.719 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.719 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.719 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.719 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.719 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.719 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.719 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:46.719 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.719 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:46.719 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.719 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:46.719 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:46.719 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:46.719 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzdjZWJiNzdmZWQ0MjdlYjY2ZTQ3NzA3MzlkNzg5OTHjrpjW: 00:25:46.719 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzViMzFmMTAwNTNkNjIwZGRjMjRmNDc0NTdhZDkxZTA5YTgxMzAxYmRmZTFlMDI2NmVmZTE2ZWRmYTFmOGEyN0me1cQ=: 00:25:46.719 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:46.719 09:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:47.287 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzdjZWJiNzdmZWQ0MjdlYjY2ZTQ3NzA3MzlkNzg5OTHjrpjW: 00:25:47.287 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzViMzFmMTAwNTNkNjIwZGRjMjRmNDc0NTdhZDkxZTA5YTgxMzAxYmRmZTFlMDI2NmVmZTE2ZWRmYTFmOGEyN0me1cQ=: ]] 00:25:47.287 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzViMzFmMTAwNTNkNjIwZGRjMjRmNDc0NTdhZDkxZTA5YTgxMzAxYmRmZTFlMDI2NmVmZTE2ZWRmYTFmOGEyN0me1cQ=: 00:25:47.287 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:47.287 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.287 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:47.287 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:47.287 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:47.287 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.287 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:47.287 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.287 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.287 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.287 09:07:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.287 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:47.287 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:47.287 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:47.287 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.287 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.287 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:47.287 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.287 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:47.287 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:47.287 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:47.287 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:47.287 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.287 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.547 nvme0n1 00:25:47.547 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.547 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.547 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.547 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.547 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.547 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.547 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.547 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.547 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.547 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.547 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.547 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.547 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:47.547 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.547 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:47.547 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:47.547 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:47.547 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YTA5ZDQzODlhZTAyZDAyMWI2N2MxNjBjZTNmYjc5ODVjNzI4NWI2ZGYwZGE2MjYxmzzaRg==: 00:25:47.547 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: 00:25:47.547 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:47.547 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:47.547 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTA5ZDQzODlhZTAyZDAyMWI2N2MxNjBjZTNmYjc5ODVjNzI4NWI2ZGYwZGE2MjYxmzzaRg==: 00:25:47.547 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: ]] 00:25:47.547 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: 00:25:47.547 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:47.547 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.547 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:47.547 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:47.547 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:47.547 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.547 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:47.547 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.547 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.547 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.547 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.547 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:47.547 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:47.547 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:47.547 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.547 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.547 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:47.547 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.547 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:47.547 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:47.547 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:47.547 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:47.547 09:07:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.547 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.806 nvme0n1 00:25:47.806 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.806 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.806 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.806 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.806 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.806 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.806 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.806 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.806 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.806 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.806 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.806 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.806 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:47.806 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.806 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:47.806 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:47.806 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:47.806 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDhkODFhYzE5YzIzOTM4Nzk4MmNkYTg5NTA4Y2IzYmbgcYOH: 00:25:47.806 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjUzYmJlZWYxNTM5NjRkMjViYTUwNDcyZGY1MGVkYTb6RiE+: 00:25:47.806 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:47.806 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:47.806 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDhkODFhYzE5YzIzOTM4Nzk4MmNkYTg5NTA4Y2IzYmbgcYOH: 00:25:47.807 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjUzYmJlZWYxNTM5NjRkMjViYTUwNDcyZGY1MGVkYTb6RiE+: ]] 00:25:47.807 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjUzYmJlZWYxNTM5NjRkMjViYTUwNDcyZGY1MGVkYTb6RiE+: 00:25:47.807 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:47.807 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.807 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:47.807 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:47.807 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:47.807 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.807 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:47.807 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.807 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.807 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.807 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.807 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:47.807 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:47.807 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:47.807 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.807 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.807 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:47.807 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.807 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:47.807 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:47.807 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:47.807 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:47.807 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.807 09:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.066 nvme0n1 00:25:48.066 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.066 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.066 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.066 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.066 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.066 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.066 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.066 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.066 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.066 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.066 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.066 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.066 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:25:48.066 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.066 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:48.066 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:48.066 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:48.066 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGJjYzMwZDI3ZWVjN2VjN2IwMjJjYmFmYzA4ZDdiYmVmYWUzZGIxNzQ1ODZjNzlmEwFWNg==: 00:25:48.066 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThlZGYyNzcyMGFlOTU2ZWE4OTFhNWRlZWJiNTM5NGIi5RZF: 00:25:48.066 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:48.066 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:48.066 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGJjYzMwZDI3ZWVjN2VjN2IwMjJjYmFmYzA4ZDdiYmVmYWUzZGIxNzQ1ODZjNzlmEwFWNg==: 00:25:48.066 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThlZGYyNzcyMGFlOTU2ZWE4OTFhNWRlZWJiNTM5NGIi5RZF: ]] 00:25:48.066 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThlZGYyNzcyMGFlOTU2ZWE4OTFhNWRlZWJiNTM5NGIi5RZF: 00:25:48.066 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:48.066 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.066 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:48.066 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:48.066 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:48.066 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.066 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:48.066 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.066 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.066 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.066 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.066 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:48.066 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:48.066 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:48.066 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.066 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.066 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:48.066 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.066 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:48.066 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:48.066 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:48.066 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:48.066 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.066 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.324 nvme0n1 00:25:48.324 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.324 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.324 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.324 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.324 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.324 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.324 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.324 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.324 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.324 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.324 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.324 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.324 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:48.325 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.325 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:48.325 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:48.325 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:48.325 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDgxMTM5M2U2Y2EyMWEwMWY2Y2JlNzMwOTA3ZjYyOGM0MzM4MWEwNGFmMWRlNDgzMzBlODgyMDkxMDI5NWQyMtYn6Y4=: 00:25:48.325 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:48.325 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:48.325 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:48.325 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDgxMTM5M2U2Y2EyMWEwMWY2Y2JlNzMwOTA3ZjYyOGM0MzM4MWEwNGFmMWRlNDgzMzBlODgyMDkxMDI5NWQyMtYn6Y4=: 00:25:48.325 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:48.325 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:48.325 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.325 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:48.325 09:07:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:48.325 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:48.325 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.325 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:48.325 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.325 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.325 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.325 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.325 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:48.325 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:48.325 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:48.325 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.325 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.325 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:48.325 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.325 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:48.325 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:48.325 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:48.325 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:48.325 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.325 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.583 nvme0n1 00:25:48.583 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.583 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.583 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.583 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.583 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.583 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.583 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.583 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.583 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.583 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.583 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.583 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:48.583 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.583 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:25:48.583 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.583 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:48.583 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:48.583 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:48.583 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzdjZWJiNzdmZWQ0MjdlYjY2ZTQ3NzA3MzlkNzg5OTHjrpjW: 00:25:48.583 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzViMzFmMTAwNTNkNjIwZGRjMjRmNDc0NTdhZDkxZTA5YTgxMzAxYmRmZTFlMDI2NmVmZTE2ZWRmYTFmOGEyN0me1cQ=: 00:25:48.583 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:48.583 09:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:50.484 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzdjZWJiNzdmZWQ0MjdlYjY2ZTQ3NzA3MzlkNzg5OTHjrpjW: 00:25:50.484 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzViMzFmMTAwNTNkNjIwZGRjMjRmNDc0NTdhZDkxZTA5YTgxMzAxYmRmZTFlMDI2NmVmZTE2ZWRmYTFmOGEyN0me1cQ=: ]] 00:25:50.484 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzViMzFmMTAwNTNkNjIwZGRjMjRmNDc0NTdhZDkxZTA5YTgxMzAxYmRmZTFlMDI2NmVmZTE2ZWRmYTFmOGEyN0me1cQ=: 00:25:50.484 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:50.484 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.484 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:50.484 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:50.484 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:50.484 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.484 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:50.484 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.484 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.484 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.484 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.484 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:50.484 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:50.484 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:50.484 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.484 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.484 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:50.484 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.484 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:50.484 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:50.484 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:50.484 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:50.484 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.484 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.743 nvme0n1 00:25:50.743 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.743 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.743 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.743 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.743 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.743 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.743 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.743 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.743 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.743 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.743 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.743 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.743 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:50.743 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.743 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:50.743 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:50.743 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:50.743 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTA5ZDQzODlhZTAyZDAyMWI2N2MxNjBjZTNmYjc5ODVjNzI4NWI2ZGYwZGE2MjYxmzzaRg==: 00:25:50.743 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: 00:25:50.743 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:50.743 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:50.743 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YTA5ZDQzODlhZTAyZDAyMWI2N2MxNjBjZTNmYjc5ODVjNzI4NWI2ZGYwZGE2MjYxmzzaRg==: 00:25:50.743 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: ]] 00:25:50.743 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: 00:25:50.743 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:50.743 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.743 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:50.743 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:50.743 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:50.743 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.743 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:50.743 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.743 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.743 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.743 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.743 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:50.743 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:50.743 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:50.743 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.743 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.743 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:50.743 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.743 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:50.743 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:50.743 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:50.743 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:50.743 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.743 09:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.335 nvme0n1 00:25:51.335 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.335 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.335 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.335 09:07:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.335 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.335 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.335 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.335 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.335 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.335 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.335 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.335 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.335 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:51.335 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.335 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:51.335 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:51.335 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:51.335 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDhkODFhYzE5YzIzOTM4Nzk4MmNkYTg5NTA4Y2IzYmbgcYOH: 00:25:51.335 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjUzYmJlZWYxNTM5NjRkMjViYTUwNDcyZGY1MGVkYTb6RiE+: 00:25:51.335 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:51.335 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:51.335 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDhkODFhYzE5YzIzOTM4Nzk4MmNkYTg5NTA4Y2IzYmbgcYOH: 00:25:51.335 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjUzYmJlZWYxNTM5NjRkMjViYTUwNDcyZGY1MGVkYTb6RiE+: ]] 00:25:51.335 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjUzYmJlZWYxNTM5NjRkMjViYTUwNDcyZGY1MGVkYTb6RiE+: 00:25:51.335 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:51.335 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.335 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:51.335 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:51.335 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:51.335 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.335 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:51.335 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.335 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.335 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.335 09:07:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.335 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:51.335 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:51.336 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:51.336 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.336 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.336 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:51.336 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.336 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:51.336 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:51.336 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:51.336 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:51.336 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.336 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.594 nvme0n1 00:25:51.594 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.594 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.594 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.594 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.594 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.594 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.594 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.594 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.594 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.594 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.852 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.852 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.852 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:51.852 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.852 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:51.852 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:51.852 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:51.852 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZGJjYzMwZDI3ZWVjN2VjN2IwMjJjYmFmYzA4ZDdiYmVmYWUzZGIxNzQ1ODZjNzlmEwFWNg==: 00:25:51.852 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThlZGYyNzcyMGFlOTU2ZWE4OTFhNWRlZWJiNTM5NGIi5RZF: 00:25:51.852 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:51.852 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:51.852 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGJjYzMwZDI3ZWVjN2VjN2IwMjJjYmFmYzA4ZDdiYmVmYWUzZGIxNzQ1ODZjNzlmEwFWNg==: 00:25:51.852 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThlZGYyNzcyMGFlOTU2ZWE4OTFhNWRlZWJiNTM5NGIi5RZF: ]] 00:25:51.852 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThlZGYyNzcyMGFlOTU2ZWE4OTFhNWRlZWJiNTM5NGIi5RZF: 00:25:51.852 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:51.852 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.852 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:51.852 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:51.852 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:51.852 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.852 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:51.852 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.852 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.852 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.852 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.852 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:51.852 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:51.852 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:51.852 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.852 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.852 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:51.852 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.852 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:51.852 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:51.852 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:51.853 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:51.853 09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.853 
09:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.111 nvme0n1 00:25:52.111 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.111 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.111 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.111 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.111 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.111 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.111 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.111 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.111 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.111 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.111 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.111 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.111 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:52.111 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.111 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:52.111 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:52.111 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:52.111 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDgxMTM5M2U2Y2EyMWEwMWY2Y2JlNzMwOTA3ZjYyOGM0MzM4MWEwNGFmMWRlNDgzMzBlODgyMDkxMDI5NWQyMtYn6Y4=: 00:25:52.112 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:52.112 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:52.112 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:52.112 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDgxMTM5M2U2Y2EyMWEwMWY2Y2JlNzMwOTA3ZjYyOGM0MzM4MWEwNGFmMWRlNDgzMzBlODgyMDkxMDI5NWQyMtYn6Y4=: 00:25:52.112 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:52.112 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:25:52.112 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.112 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:52.112 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:52.112 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:52.112 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.112 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:52.112 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.112 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.112 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.112 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.112 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:52.112 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:52.112 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:52.112 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.112 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.112 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:52.112 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.112 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:52.112 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:52.112 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:52.112 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:52.112 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.112 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.679 nvme0n1 00:25:52.679 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.679 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.679 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.679 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.679 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.679 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.679 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.679 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.679 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.679 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.679 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.680 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:52.680 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.680 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:52.680 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.680 09:07:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:52.680 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:52.680 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:52.680 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzdjZWJiNzdmZWQ0MjdlYjY2ZTQ3NzA3MzlkNzg5OTHjrpjW: 00:25:52.680 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzViMzFmMTAwNTNkNjIwZGRjMjRmNDc0NTdhZDkxZTA5YTgxMzAxYmRmZTFlMDI2NmVmZTE2ZWRmYTFmOGEyN0me1cQ=: 00:25:52.680 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:52.680 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:52.680 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzdjZWJiNzdmZWQ0MjdlYjY2ZTQ3NzA3MzlkNzg5OTHjrpjW: 00:25:52.680 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzViMzFmMTAwNTNkNjIwZGRjMjRmNDc0NTdhZDkxZTA5YTgxMzAxYmRmZTFlMDI2NmVmZTE2ZWRmYTFmOGEyN0me1cQ=: ]] 00:25:52.680 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzViMzFmMTAwNTNkNjIwZGRjMjRmNDc0NTdhZDkxZTA5YTgxMzAxYmRmZTFlMDI2NmVmZTE2ZWRmYTFmOGEyN0me1cQ=: 00:25:52.680 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:52.680 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.680 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:52.680 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:52.680 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:52.680 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.680 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:52.680 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.680 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.680 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.680 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.680 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:52.680 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:52.680 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:52.680 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.680 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.680 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:52.680 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.680 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:52.680 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:52.680 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:52.680 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:52.680 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.680 09:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.247 nvme0n1 00:25:53.248 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.248 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.248 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.248 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.248 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.248 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.248 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.248 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.248 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.248 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.248 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.248 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.248 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:53.248 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.248 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:53.248 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:53.248 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:53.248 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTA5ZDQzODlhZTAyZDAyMWI2N2MxNjBjZTNmYjc5ODVjNzI4NWI2ZGYwZGE2MjYxmzzaRg==: 00:25:53.248 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: 00:25:53.248 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:53.248 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:53.248 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTA5ZDQzODlhZTAyZDAyMWI2N2MxNjBjZTNmYjc5ODVjNzI4NWI2ZGYwZGE2MjYxmzzaRg==: 00:25:53.248 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: ]] 00:25:53.248 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: 00:25:53.248 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:53.248 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.248 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:53.248 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:53.248 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:53.248 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.248 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:53.248 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.248 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.248 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.248 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.248 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:53.248 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:53.248 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:53.248 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.248 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.248 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:53.248 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.248 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:53.248 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:53.248 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:53.248 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:53.248 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.248 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.815 nvme0n1 00:25:53.815 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.815 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.815 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.815 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.815 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.075 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.075 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.075 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.075 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:54.075 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.075 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.075 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.075 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:54.075 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.075 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:54.075 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:54.075 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:54.075 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDhkODFhYzE5YzIzOTM4Nzk4MmNkYTg5NTA4Y2IzYmbgcYOH: 00:25:54.075 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjUzYmJlZWYxNTM5NjRkMjViYTUwNDcyZGY1MGVkYTb6RiE+: 00:25:54.075 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:54.075 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:54.075 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDhkODFhYzE5YzIzOTM4Nzk4MmNkYTg5NTA4Y2IzYmbgcYOH: 00:25:54.075 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjUzYmJlZWYxNTM5NjRkMjViYTUwNDcyZGY1MGVkYTb6RiE+: ]] 00:25:54.075 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjUzYmJlZWYxNTM5NjRkMjViYTUwNDcyZGY1MGVkYTb6RiE+: 00:25:54.075 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:54.075 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.075 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:54.075 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:54.075 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:54.075 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.075 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:54.075 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.075 09:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.075 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.075 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.075 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:54.075 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:54.075 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:54.075 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.075 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.075 
09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:54.075 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.075 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:54.075 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:54.075 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:54.075 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:54.075 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.075 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.642 nvme0n1 00:25:54.642 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.642 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.642 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.642 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.642 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.642 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.642 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.642 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.642 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.642 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.642 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.642 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.642 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:54.642 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.642 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:54.642 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:54.642 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:54.642 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGJjYzMwZDI3ZWVjN2VjN2IwMjJjYmFmYzA4ZDdiYmVmYWUzZGIxNzQ1ODZjNzlmEwFWNg==: 00:25:54.642 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThlZGYyNzcyMGFlOTU2ZWE4OTFhNWRlZWJiNTM5NGIi5RZF: 00:25:54.642 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:54.642 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:54.642 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGJjYzMwZDI3ZWVjN2VjN2IwMjJjYmFmYzA4ZDdiYmVmYWUzZGIxNzQ1ODZjNzlmEwFWNg==: 00:25:54.642 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:YThlZGYyNzcyMGFlOTU2ZWE4OTFhNWRlZWJiNTM5NGIi5RZF: ]] 00:25:54.642 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThlZGYyNzcyMGFlOTU2ZWE4OTFhNWRlZWJiNTM5NGIi5RZF: 00:25:54.642 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:54.642 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.642 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:54.642 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:54.642 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:54.642 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.642 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:54.642 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.642 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.642 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.642 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.642 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:54.642 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:54.642 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:54.642 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.642 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.642 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:54.642 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.643 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:54.643 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:54.643 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:54.643 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:54.643 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.643 09:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.258 nvme0n1 00:25:55.258 09:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.258 09:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.258 09:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.258 09:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.258 09:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.258 09:08:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.517 09:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.517 09:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.517 09:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.517 09:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.517 09:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.517 09:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.517 09:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:55.517 09:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.517 09:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:55.517 09:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:55.517 09:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:55.517 09:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDgxMTM5M2U2Y2EyMWEwMWY2Y2JlNzMwOTA3ZjYyOGM0MzM4MWEwNGFmMWRlNDgzMzBlODgyMDkxMDI5NWQyMtYn6Y4=: 00:25:55.517 09:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:55.517 09:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:55.517 09:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:55.517 09:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDgxMTM5M2U2Y2EyMWEwMWY2Y2JlNzMwOTA3ZjYyOGM0MzM4MWEwNGFmMWRlNDgzMzBlODgyMDkxMDI5NWQyMtYn6Y4=: 00:25:55.517 09:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:55.517 09:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:55.517 09:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.517 09:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:55.517 09:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:55.517 09:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:55.517 09:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.517 09:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:55.517 09:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.517 09:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.517 09:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.517 09:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.517 09:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:55.517 09:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:55.517 09:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:55.517 09:08:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.517 09:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.517 09:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:55.517 09:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.517 09:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:55.517 09:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:55.517 09:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:55.517 09:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:55.517 09:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.517 09:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.085 nvme0n1 00:25:56.085 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.085 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.085 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.085 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.085 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.085 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.085 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.085 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.085 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.085 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.085 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.086 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:56.086 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:56.086 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.086 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:25:56.086 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.086 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:56.086 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:56.086 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:56.086 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzdjZWJiNzdmZWQ0MjdlYjY2ZTQ3NzA3MzlkNzg5OTHjrpjW: 00:25:56.086 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MzViMzFmMTAwNTNkNjIwZGRjMjRmNDc0NTdhZDkxZTA5YTgxMzAxYmRmZTFlMDI2NmVmZTE2ZWRmYTFmOGEyN0me1cQ=: 00:25:56.086 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:56.086 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:56.086 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzdjZWJiNzdmZWQ0MjdlYjY2ZTQ3NzA3MzlkNzg5OTHjrpjW: 00:25:56.086 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzViMzFmMTAwNTNkNjIwZGRjMjRmNDc0NTdhZDkxZTA5YTgxMzAxYmRmZTFlMDI2NmVmZTE2ZWRmYTFmOGEyN0me1cQ=: ]] 00:25:56.086 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzViMzFmMTAwNTNkNjIwZGRjMjRmNDc0NTdhZDkxZTA5YTgxMzAxYmRmZTFlMDI2NmVmZTE2ZWRmYTFmOGEyN0me1cQ=: 00:25:56.086 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:56.086 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.086 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:56.086 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:56.086 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:56.086 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.086 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:56.086 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.086 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.086 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.086 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.086 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:56.086 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:56.086 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:56.086 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.086 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.086 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:56.086 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.086 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:56.086 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:56.086 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:56.086 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:56.086 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.086 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:56.345 nvme0n1 00:25:56.345 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.345 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.345 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.345 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.345 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.345 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.345 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.345 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.345 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.345 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.345 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.345 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.345 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:56.345 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.345 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:56.345 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:56.345 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:56.345 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTA5ZDQzODlhZTAyZDAyMWI2N2MxNjBjZTNmYjc5ODVjNzI4NWI2ZGYwZGE2MjYxmzzaRg==: 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTA5ZDQzODlhZTAyZDAyMWI2N2MxNjBjZTNmYjc5ODVjNzI4NWI2ZGYwZGE2MjYxmzzaRg==: 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: ]] 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.346 nvme0n1 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:56.346 
09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDhkODFhYzE5YzIzOTM4Nzk4MmNkYTg5NTA4Y2IzYmbgcYOH: 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjUzYmJlZWYxNTM5NjRkMjViYTUwNDcyZGY1MGVkYTb6RiE+: 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDhkODFhYzE5YzIzOTM4Nzk4MmNkYTg5NTA4Y2IzYmbgcYOH: 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjUzYmJlZWYxNTM5NjRkMjViYTUwNDcyZGY1MGVkYTb6RiE+: ]] 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjUzYmJlZWYxNTM5NjRkMjViYTUwNDcyZGY1MGVkYTb6RiE+: 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.346 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.605 nvme0n1 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGJjYzMwZDI3ZWVjN2VjN2IwMjJjYmFmYzA4ZDdiYmVmYWUzZGIxNzQ1ODZjNzlmEwFWNg==: 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThlZGYyNzcyMGFlOTU2ZWE4OTFhNWRlZWJiNTM5NGIi5RZF: 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGJjYzMwZDI3ZWVjN2VjN2IwMjJjYmFmYzA4ZDdiYmVmYWUzZGIxNzQ1ODZjNzlmEwFWNg==: 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThlZGYyNzcyMGFlOTU2ZWE4OTFhNWRlZWJiNTM5NGIi5RZF: ]] 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThlZGYyNzcyMGFlOTU2ZWE4OTFhNWRlZWJiNTM5NGIi5RZF: 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.605 
09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.605 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.864 nvme0n1 00:25:56.864 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.864 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.864 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.864 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.864 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.864 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.864 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.864 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.864 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.864 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:56.864 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.864 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.864 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:56.864 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.864 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:56.864 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:56.864 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:56.864 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDgxMTM5M2U2Y2EyMWEwMWY2Y2JlNzMwOTA3ZjYyOGM0MzM4MWEwNGFmMWRlNDgzMzBlODgyMDkxMDI5NWQyMtYn6Y4=: 00:25:56.864 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:56.864 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:56.864 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:56.864 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDgxMTM5M2U2Y2EyMWEwMWY2Y2JlNzMwOTA3ZjYyOGM0MzM4MWEwNGFmMWRlNDgzMzBlODgyMDkxMDI5NWQyMtYn6Y4=: 00:25:56.864 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:56.864 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:56.865 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.865 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:56.865 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:56.865 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:56.865 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.865 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:56.865 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.865 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.865 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.865 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.865 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:56.865 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:56.865 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:56.865 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.865 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.865 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:56.865 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.865 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:56.865 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:56.865 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:56.865 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:56.865 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.865 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.865 nvme0n1 00:25:56.865 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.865 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.865 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.865 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.865 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.865 09:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.124 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.124 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.124 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.124 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.124 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.124 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:57.124 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzdjZWJiNzdmZWQ0MjdlYjY2ZTQ3NzA3MzlkNzg5OTHjrpjW: 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzViMzFmMTAwNTNkNjIwZGRjMjRmNDc0NTdhZDkxZTA5YTgxMzAxYmRmZTFlMDI2NmVmZTE2ZWRmYTFmOGEyN0me1cQ=: 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzdjZWJiNzdmZWQ0MjdlYjY2ZTQ3NzA3MzlkNzg5OTHjrpjW: 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzViMzFmMTAwNTNkNjIwZGRjMjRmNDc0NTdhZDkxZTA5YTgxMzAxYmRmZTFlMDI2NmVmZTE2ZWRmYTFmOGEyN0me1cQ=: ]] 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MzViMzFmMTAwNTNkNjIwZGRjMjRmNDc0NTdhZDkxZTA5YTgxMzAxYmRmZTFlMDI2NmVmZTE2ZWRmYTFmOGEyN0me1cQ=: 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.125 nvme0n1 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.125 
09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTA5ZDQzODlhZTAyZDAyMWI2N2MxNjBjZTNmYjc5ODVjNzI4NWI2ZGYwZGE2MjYxmzzaRg==: 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTA5ZDQzODlhZTAyZDAyMWI2N2MxNjBjZTNmYjc5ODVjNzI4NWI2ZGYwZGE2MjYxmzzaRg==: 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: ]] 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:25:57.125 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.385 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:57.385 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:57.385 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:57.385 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.385 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:57.385 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.385 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.385 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.385 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.385 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:57.385 09:08:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:57.385 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:57.385 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.385 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.385 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:57.385 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.385 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:57.385 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:57.385 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:57.385 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:57.385 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.385 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.385 nvme0n1 00:25:57.385 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.385 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.385 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.385 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.385 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.385 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.385 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.385 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.385 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.385 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.385 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.385 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.385 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:57.385 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.385 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:57.385 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:57.385 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:57.385 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDhkODFhYzE5YzIzOTM4Nzk4MmNkYTg5NTA4Y2IzYmbgcYOH: 00:25:57.385 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjUzYmJlZWYxNTM5NjRkMjViYTUwNDcyZGY1MGVkYTb6RiE+: 00:25:57.385 09:08:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:57.385 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:57.385 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDhkODFhYzE5YzIzOTM4Nzk4MmNkYTg5NTA4Y2IzYmbgcYOH: 00:25:57.385 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjUzYmJlZWYxNTM5NjRkMjViYTUwNDcyZGY1MGVkYTb6RiE+: ]] 00:25:57.386 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjUzYmJlZWYxNTM5NjRkMjViYTUwNDcyZGY1MGVkYTb6RiE+: 00:25:57.386 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:57.386 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.386 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:57.386 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:57.386 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:57.386 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.386 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:57.386 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.386 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.386 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.386 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.386 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:57.386 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:57.386 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:57.386 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.386 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.386 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:57.386 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.386 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:57.386 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:57.386 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:57.386 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:57.386 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.386 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.645 nvme0n1 00:25:57.645 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.645 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.645 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.645 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.645 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.645 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.645 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.645 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.645 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.645 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.645 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.645 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.645 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:57.645 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.646 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:57.646 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:57.646 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:57.646 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGJjYzMwZDI3ZWVjN2VjN2IwMjJjYmFmYzA4ZDdiYmVmYWUzZGIxNzQ1ODZjNzlmEwFWNg==: 00:25:57.646 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThlZGYyNzcyMGFlOTU2ZWE4OTFhNWRlZWJiNTM5NGIi5RZF: 00:25:57.646 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:57.646 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:57.646 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGJjYzMwZDI3ZWVjN2VjN2IwMjJjYmFmYzA4ZDdiYmVmYWUzZGIxNzQ1ODZjNzlmEwFWNg==: 00:25:57.646 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThlZGYyNzcyMGFlOTU2ZWE4OTFhNWRlZWJiNTM5NGIi5RZF: ]] 00:25:57.646 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThlZGYyNzcyMGFlOTU2ZWE4OTFhNWRlZWJiNTM5NGIi5RZF: 00:25:57.646 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:57.646 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.646 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:57.646 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:57.646 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:57.646 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.646 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:57.646 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.646 09:08:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.646 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.646 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.646 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:57.646 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:57.646 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:57.646 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.646 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.646 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:57.646 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.646 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:57.646 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:57.646 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:57.646 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:57.646 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.646 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.905 nvme0n1 00:25:57.905 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.905 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.906 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.906 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.906 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.906 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.906 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.906 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.906 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.906 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.906 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.906 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.906 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:57.906 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.906 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:57.906 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:57.906 
09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:57.906 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDgxMTM5M2U2Y2EyMWEwMWY2Y2JlNzMwOTA3ZjYyOGM0MzM4MWEwNGFmMWRlNDgzMzBlODgyMDkxMDI5NWQyMtYn6Y4=: 00:25:57.906 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:57.906 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:57.906 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:57.906 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDgxMTM5M2U2Y2EyMWEwMWY2Y2JlNzMwOTA3ZjYyOGM0MzM4MWEwNGFmMWRlNDgzMzBlODgyMDkxMDI5NWQyMtYn6Y4=: 00:25:57.906 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:57.906 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:57.906 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.906 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:57.906 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:57.906 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:57.906 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.906 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:57.906 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.906 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.906 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.906 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.906 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:57.906 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:57.906 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:57.906 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.906 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.906 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:57.906 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.906 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:57.906 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:57.906 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:57.906 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:57.906 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.906 09:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:58.165 nvme0n1 00:25:58.165 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.165 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.165 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.165 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.165 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.165 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.165 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.165 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.165 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.165 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.165 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.165 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:58.166 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:58.166 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:58.166 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.166 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:58.166 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:58.166 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:58.166 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzdjZWJiNzdmZWQ0MjdlYjY2ZTQ3NzA3MzlkNzg5OTHjrpjW: 00:25:58.166 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzViMzFmMTAwNTNkNjIwZGRjMjRmNDc0NTdhZDkxZTA5YTgxMzAxYmRmZTFlMDI2NmVmZTE2ZWRmYTFmOGEyN0me1cQ=: 00:25:58.166 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:58.166 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:58.166 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzdjZWJiNzdmZWQ0MjdlYjY2ZTQ3NzA3MzlkNzg5OTHjrpjW: 00:25:58.166 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzViMzFmMTAwNTNkNjIwZGRjMjRmNDc0NTdhZDkxZTA5YTgxMzAxYmRmZTFlMDI2NmVmZTE2ZWRmYTFmOGEyN0me1cQ=: ]] 00:25:58.166 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzViMzFmMTAwNTNkNjIwZGRjMjRmNDc0NTdhZDkxZTA5YTgxMzAxYmRmZTFlMDI2NmVmZTE2ZWRmYTFmOGEyN0me1cQ=: 00:25:58.166 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:58.166 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.166 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:58.166 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:58.166 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:58.166 09:08:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.166 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:58.166 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.166 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.166 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.166 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.166 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:58.166 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:58.166 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:58.166 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.166 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.166 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:58.166 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.166 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:58.166 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:58.166 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:58.166 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:58.166 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.166 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.425 nvme0n1 00:25:58.425 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.425 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.425 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.425 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.425 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.425 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.425 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.425 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.425 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.425 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.425 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.425 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:58.425 09:08:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:58.425 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.425 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:58.425 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:58.425 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:58.425 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTA5ZDQzODlhZTAyZDAyMWI2N2MxNjBjZTNmYjc5ODVjNzI4NWI2ZGYwZGE2MjYxmzzaRg==: 00:25:58.425 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: 00:25:58.425 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:58.425 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:58.425 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTA5ZDQzODlhZTAyZDAyMWI2N2MxNjBjZTNmYjc5ODVjNzI4NWI2ZGYwZGE2MjYxmzzaRg==: 00:25:58.425 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: ]] 00:25:58.425 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: 00:25:58.425 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:58.425 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.425 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:58.425 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:58.425 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:58.425 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.425 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:58.425 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.425 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.425 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.425 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.425 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:58.425 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:58.425 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:58.425 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.425 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.425 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:58.426 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.426 09:08:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:58.426 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:58.426 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:58.426 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:58.426 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.426 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.686 nvme0n1 00:25:58.686 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.686 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.686 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.686 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.686 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.686 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.686 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.686 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.686 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.686 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.686 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.686 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:58.686 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:58.686 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.686 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:58.686 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:58.686 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:58.686 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDhkODFhYzE5YzIzOTM4Nzk4MmNkYTg5NTA4Y2IzYmbgcYOH: 00:25:58.686 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjUzYmJlZWYxNTM5NjRkMjViYTUwNDcyZGY1MGVkYTb6RiE+: 00:25:58.686 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:58.686 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:58.686 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDhkODFhYzE5YzIzOTM4Nzk4MmNkYTg5NTA4Y2IzYmbgcYOH: 00:25:58.686 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjUzYmJlZWYxNTM5NjRkMjViYTUwNDcyZGY1MGVkYTb6RiE+: ]] 00:25:58.686 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjUzYmJlZWYxNTM5NjRkMjViYTUwNDcyZGY1MGVkYTb6RiE+: 00:25:58.686 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:58.686 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.686 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:58.686 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:58.686 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:58.686 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.686 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:58.686 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.686 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.686 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.686 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.686 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:58.686 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:58.686 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:58.686 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.686 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.686 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:58.686 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.686 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:58.686 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:58.686 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:58.686 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:58.686 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.686 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.946 nvme0n1 00:25:58.946 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.946 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.946 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.946 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.946 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.946 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.946 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.946 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:58.946 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.946 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.946 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.946 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:58.946 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:25:58.946 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.946 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:58.946 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:58.946 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:58.946 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGJjYzMwZDI3ZWVjN2VjN2IwMjJjYmFmYzA4ZDdiYmVmYWUzZGIxNzQ1ODZjNzlmEwFWNg==: 00:25:58.946 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThlZGYyNzcyMGFlOTU2ZWE4OTFhNWRlZWJiNTM5NGIi5RZF: 00:25:58.946 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:58.946 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:58.946 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGJjYzMwZDI3ZWVjN2VjN2IwMjJjYmFmYzA4ZDdiYmVmYWUzZGIxNzQ1ODZjNzlmEwFWNg==: 00:25:58.946 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThlZGYyNzcyMGFlOTU2ZWE4OTFhNWRlZWJiNTM5NGIi5RZF: ]] 00:25:58.946 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThlZGYyNzcyMGFlOTU2ZWE4OTFhNWRlZWJiNTM5NGIi5RZF: 00:25:58.946 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:58.946 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.946 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:58.946 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:58.946 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:58.946 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.946 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:58.946 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.946 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.946 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.946 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.946 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:58.946 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:58.946 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:58.946 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.946 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.946 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:58.946 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.946 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:58.946 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:58.946 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:58.946 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:58.946 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.946 09:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.219 nvme0n1 00:25:59.219 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.219 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.219 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.219 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.219 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.219 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.219 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.219 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.219 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.219 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.219 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.219 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.219 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:59.219 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.219 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:59.219 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:59.219 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:59.219 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDgxMTM5M2U2Y2EyMWEwMWY2Y2JlNzMwOTA3ZjYyOGM0MzM4MWEwNGFmMWRlNDgzMzBlODgyMDkxMDI5NWQyMtYn6Y4=: 00:25:59.219 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:59.219 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:59.219 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:59.219 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NDgxMTM5M2U2Y2EyMWEwMWY2Y2JlNzMwOTA3ZjYyOGM0MzM4MWEwNGFmMWRlNDgzMzBlODgyMDkxMDI5NWQyMtYn6Y4=: 00:25:59.219 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:59.219 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:59.219 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.219 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:59.219 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:59.219 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:59.219 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.219 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:59.219 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.219 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.219 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.219 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.219 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:59.219 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:59.219 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:59.219 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.219 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.219 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:59.219 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.219 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:59.219 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:59.219 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:59.219 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:59.219 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.219 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.489 nvme0n1 00:25:59.489 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.489 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.489 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.489 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.489 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.489 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.489 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.489 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.489 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.489 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.489 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.489 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:59.489 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.489 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:59.489 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.489 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:59.489 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:59.489 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:59.489 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzdjZWJiNzdmZWQ0MjdlYjY2ZTQ3NzA3MzlkNzg5OTHjrpjW: 00:25:59.489 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzViMzFmMTAwNTNkNjIwZGRjMjRmNDc0NTdhZDkxZTA5YTgxMzAxYmRmZTFlMDI2NmVmZTE2ZWRmYTFmOGEyN0me1cQ=: 00:25:59.489 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:59.489 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:59.489 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzdjZWJiNzdmZWQ0MjdlYjY2ZTQ3NzA3MzlkNzg5OTHjrpjW: 00:25:59.489 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzViMzFmMTAwNTNkNjIwZGRjMjRmNDc0NTdhZDkxZTA5YTgxMzAxYmRmZTFlMDI2NmVmZTE2ZWRmYTFmOGEyN0me1cQ=: ]] 00:25:59.489 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzViMzFmMTAwNTNkNjIwZGRjMjRmNDc0NTdhZDkxZTA5YTgxMzAxYmRmZTFlMDI2NmVmZTE2ZWRmYTFmOGEyN0me1cQ=: 00:25:59.489 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:59.489 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.489 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:59.489 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:59.489 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:59.489 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.489 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:59.489 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.489 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.489 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.489 09:08:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.489 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:59.489 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:59.489 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:59.489 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.489 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.489 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:59.489 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.489 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:59.490 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:59.490 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:59.490 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:59.490 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.490 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.748 nvme0n1 00:25:59.748 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.748 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.748 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.748 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.748 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.007 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.008 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.008 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.008 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.008 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.008 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.008 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.008 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:00.008 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.008 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:00.008 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:00.008 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:00.008 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YTA5ZDQzODlhZTAyZDAyMWI2N2MxNjBjZTNmYjc5ODVjNzI4NWI2ZGYwZGE2MjYxmzzaRg==: 00:26:00.008 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: 00:26:00.008 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:00.008 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:00.008 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTA5ZDQzODlhZTAyZDAyMWI2N2MxNjBjZTNmYjc5ODVjNzI4NWI2ZGYwZGE2MjYxmzzaRg==: 00:26:00.008 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: ]] 00:26:00.008 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: 00:26:00.008 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:26:00.008 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.008 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:00.008 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:00.008 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:00.008 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.008 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:00.008 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.008 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.008 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.008 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.008 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:00.008 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:00.008 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:00.008 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.008 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.008 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:00.008 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.008 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:00.008 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:00.008 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:00.008 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:00.008 09:08:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.008 09:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.267 nvme0n1 00:26:00.267 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.267 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.267 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.267 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.267 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.267 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.267 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.267 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.267 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.267 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.267 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.267 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.267 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:00.267 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.267 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:00.267 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:00.267 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:00.267 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDhkODFhYzE5YzIzOTM4Nzk4MmNkYTg5NTA4Y2IzYmbgcYOH: 00:26:00.267 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjUzYmJlZWYxNTM5NjRkMjViYTUwNDcyZGY1MGVkYTb6RiE+: 00:26:00.267 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:00.267 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:00.267 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDhkODFhYzE5YzIzOTM4Nzk4MmNkYTg5NTA4Y2IzYmbgcYOH: 00:26:00.267 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjUzYmJlZWYxNTM5NjRkMjViYTUwNDcyZGY1MGVkYTb6RiE+: ]] 00:26:00.267 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjUzYmJlZWYxNTM5NjRkMjViYTUwNDcyZGY1MGVkYTb6RiE+: 00:26:00.267 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:26:00.268 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.268 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:00.268 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:00.268 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:00.268 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.268 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:00.268 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.268 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.268 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.268 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.268 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:00.268 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:00.268 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:00.268 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.268 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.268 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:00.268 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.268 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:00.268 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:00.268 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:00.268 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:00.268 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.268 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.835 nvme0n1 00:26:00.835 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.835 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.835 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.835 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.835 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.835 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.835 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.835 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.835 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.835 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.835 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.835 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.835 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:26:00.835 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.835 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:00.835 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:00.835 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:00.835 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGJjYzMwZDI3ZWVjN2VjN2IwMjJjYmFmYzA4ZDdiYmVmYWUzZGIxNzQ1ODZjNzlmEwFWNg==: 00:26:00.835 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThlZGYyNzcyMGFlOTU2ZWE4OTFhNWRlZWJiNTM5NGIi5RZF: 00:26:00.835 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:00.835 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:00.835 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGJjYzMwZDI3ZWVjN2VjN2IwMjJjYmFmYzA4ZDdiYmVmYWUzZGIxNzQ1ODZjNzlmEwFWNg==: 00:26:00.835 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThlZGYyNzcyMGFlOTU2ZWE4OTFhNWRlZWJiNTM5NGIi5RZF: ]] 00:26:00.835 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThlZGYyNzcyMGFlOTU2ZWE4OTFhNWRlZWJiNTM5NGIi5RZF: 00:26:00.835 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:00.835 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.835 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:00.835 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:00.835 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:00.835 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.835 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:00.835 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.835 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.835 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.835 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.835 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:00.835 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:00.835 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:00.835 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.835 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.835 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:00.835 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.835 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:00.835 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:00.835 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:00.835 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:00.835 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.835 09:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.093 nvme0n1 00:26:01.093 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.093 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.093 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.093 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.093 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.093 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.093 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.093 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.093 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.093 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.351 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.351 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.351 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:01.351 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.351 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:01.351 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:01.351 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:01.351 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDgxMTM5M2U2Y2EyMWEwMWY2Y2JlNzMwOTA3ZjYyOGM0MzM4MWEwNGFmMWRlNDgzMzBlODgyMDkxMDI5NWQyMtYn6Y4=: 00:26:01.351 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:01.351 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:01.351 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:01.351 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDgxMTM5M2U2Y2EyMWEwMWY2Y2JlNzMwOTA3ZjYyOGM0MzM4MWEwNGFmMWRlNDgzMzBlODgyMDkxMDI5NWQyMtYn6Y4=: 00:26:01.351 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:01.351 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:01.351 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.351 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:01.351 09:08:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:01.351 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:01.351 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.351 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:01.351 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.352 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.352 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.352 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.352 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:01.352 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:01.352 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:01.352 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.352 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.352 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:01.352 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.352 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:01.352 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:01.352 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:01.352 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:01.352 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.352 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.610 nvme0n1 00:26:01.610 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.610 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.610 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.610 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.610 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.610 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.610 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.610 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.610 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.610 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.610 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.610 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:01.610 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.610 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:01.610 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.610 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:01.610 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:01.610 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:01.610 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzdjZWJiNzdmZWQ0MjdlYjY2ZTQ3NzA3MzlkNzg5OTHjrpjW: 00:26:01.610 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzViMzFmMTAwNTNkNjIwZGRjMjRmNDc0NTdhZDkxZTA5YTgxMzAxYmRmZTFlMDI2NmVmZTE2ZWRmYTFmOGEyN0me1cQ=: 00:26:01.610 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:01.610 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:01.610 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzdjZWJiNzdmZWQ0MjdlYjY2ZTQ3NzA3MzlkNzg5OTHjrpjW: 00:26:01.610 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzViMzFmMTAwNTNkNjIwZGRjMjRmNDc0NTdhZDkxZTA5YTgxMzAxYmRmZTFlMDI2NmVmZTE2ZWRmYTFmOGEyN0me1cQ=: ]] 00:26:01.610 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzViMzFmMTAwNTNkNjIwZGRjMjRmNDc0NTdhZDkxZTA5YTgxMzAxYmRmZTFlMDI2NmVmZTE2ZWRmYTFmOGEyN0me1cQ=: 00:26:01.610 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:01.610 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.610 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:01.610 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:01.610 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:01.611 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.611 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:01.611 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.611 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.611 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.611 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.611 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:01.611 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:01.611 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:01.611 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.611 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.611 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:01.611 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.611 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:01.611 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:01.611 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:01.611 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:01.611 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.611 09:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.177 nvme0n1 00:26:02.177 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.177 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.177 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.177 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.177 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.436 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.436 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.436 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.436 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.436 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.436 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.436 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.436 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:26:02.436 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.436 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:02.436 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:02.436 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:02.436 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTA5ZDQzODlhZTAyZDAyMWI2N2MxNjBjZTNmYjc5ODVjNzI4NWI2ZGYwZGE2MjYxmzzaRg==: 00:26:02.436 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: 00:26:02.436 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:02.436 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:02.436 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YTA5ZDQzODlhZTAyZDAyMWI2N2MxNjBjZTNmYjc5ODVjNzI4NWI2ZGYwZGE2MjYxmzzaRg==: 00:26:02.436 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: ]] 00:26:02.436 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: 00:26:02.436 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:02.436 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.436 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:02.436 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:02.436 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:02.436 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.436 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:02.436 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.436 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.436 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.436 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.436 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:02.436 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:02.436 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:02.436 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.436 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.436 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:02.436 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.436 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:02.436 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:02.436 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:02.436 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:02.436 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.436 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.004 nvme0n1 00:26:03.004 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.004 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.004 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.004 09:08:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.004 09:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.004 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.004 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.004 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.004 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.004 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.004 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.004 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.004 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:03.004 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.004 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:03.004 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:03.004 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:03.004 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDhkODFhYzE5YzIzOTM4Nzk4MmNkYTg5NTA4Y2IzYmbgcYOH: 00:26:03.004 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjUzYmJlZWYxNTM5NjRkMjViYTUwNDcyZGY1MGVkYTb6RiE+: 00:26:03.004 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:03.004 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:03.004 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDhkODFhYzE5YzIzOTM4Nzk4MmNkYTg5NTA4Y2IzYmbgcYOH: 00:26:03.004 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjUzYmJlZWYxNTM5NjRkMjViYTUwNDcyZGY1MGVkYTb6RiE+: ]] 00:26:03.004 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjUzYmJlZWYxNTM5NjRkMjViYTUwNDcyZGY1MGVkYTb6RiE+: 00:26:03.004 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:03.004 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.004 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:03.004 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:03.004 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:03.005 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.005 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:03.005 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.005 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.005 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.005 09:08:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.005 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:03.005 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:03.005 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:03.005 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.005 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.005 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:03.005 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.005 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:03.005 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:03.005 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:03.005 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:03.005 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.005 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.572 nvme0n1 00:26:03.572 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.572 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.572 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.572 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.572 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.831 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.831 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.831 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.831 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.831 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.831 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.831 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.831 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:03.831 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.831 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:03.831 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:03.831 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:03.831 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZGJjYzMwZDI3ZWVjN2VjN2IwMjJjYmFmYzA4ZDdiYmVmYWUzZGIxNzQ1ODZjNzlmEwFWNg==: 00:26:03.832 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThlZGYyNzcyMGFlOTU2ZWE4OTFhNWRlZWJiNTM5NGIi5RZF: 00:26:03.832 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:03.832 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:03.832 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGJjYzMwZDI3ZWVjN2VjN2IwMjJjYmFmYzA4ZDdiYmVmYWUzZGIxNzQ1ODZjNzlmEwFWNg==: 00:26:03.832 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThlZGYyNzcyMGFlOTU2ZWE4OTFhNWRlZWJiNTM5NGIi5RZF: ]] 00:26:03.832 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThlZGYyNzcyMGFlOTU2ZWE4OTFhNWRlZWJiNTM5NGIi5RZF: 00:26:03.832 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:03.832 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.832 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:03.832 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:03.832 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:03.832 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.832 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:03.832 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.832 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.832 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.832 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.832 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:03.832 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:03.832 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:03.832 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.832 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.832 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:03.832 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.832 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:03.832 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:03.832 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:03.832 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:03.832 09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.832 
09:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.400 nvme0n1 00:26:04.400 09:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.400 09:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.400 09:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.400 09:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.400 09:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.400 09:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.400 09:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.400 09:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.400 09:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.400 09:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.400 09:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.400 09:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.400 09:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:04.400 09:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.400 09:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:04.400 09:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:04.400 09:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:04.400 09:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDgxMTM5M2U2Y2EyMWEwMWY2Y2JlNzMwOTA3ZjYyOGM0MzM4MWEwNGFmMWRlNDgzMzBlODgyMDkxMDI5NWQyMtYn6Y4=: 00:26:04.400 09:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:04.400 09:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:04.400 09:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:04.400 09:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDgxMTM5M2U2Y2EyMWEwMWY2Y2JlNzMwOTA3ZjYyOGM0MzM4MWEwNGFmMWRlNDgzMzBlODgyMDkxMDI5NWQyMtYn6Y4=: 00:26:04.400 09:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:04.400 09:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:04.400 09:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.400 09:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:04.400 09:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:04.400 09:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:04.400 09:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.400 09:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:04.400 09:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.400 09:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.400 09:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.400 09:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.400 09:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:04.400 09:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:04.400 09:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:04.400 09:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.400 09:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.400 09:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:04.400 09:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.400 09:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:04.400 09:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:04.400 09:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:04.400 09:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:04.400 09:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.400 09:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.337 nvme0n1 00:26:05.337 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.337 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.337 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.337 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.337 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.337 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.337 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.337 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.337 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.337 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.337 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.337 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:05.337 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:05.337 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.337 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:05.337 09:08:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.337 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:05.337 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:05.337 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:05.337 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzdjZWJiNzdmZWQ0MjdlYjY2ZTQ3NzA3MzlkNzg5OTHjrpjW: 00:26:05.337 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzViMzFmMTAwNTNkNjIwZGRjMjRmNDc0NTdhZDkxZTA5YTgxMzAxYmRmZTFlMDI2NmVmZTE2ZWRmYTFmOGEyN0me1cQ=: 00:26:05.337 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:05.337 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:05.337 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzdjZWJiNzdmZWQ0MjdlYjY2ZTQ3NzA3MzlkNzg5OTHjrpjW: 00:26:05.337 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzViMzFmMTAwNTNkNjIwZGRjMjRmNDc0NTdhZDkxZTA5YTgxMzAxYmRmZTFlMDI2NmVmZTE2ZWRmYTFmOGEyN0me1cQ=: ]] 00:26:05.337 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzViMzFmMTAwNTNkNjIwZGRjMjRmNDc0NTdhZDkxZTA5YTgxMzAxYmRmZTFlMDI2NmVmZTE2ZWRmYTFmOGEyN0me1cQ=: 00:26:05.337 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:05.337 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:05.338 09:08:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.338 nvme0n1 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTA5ZDQzODlhZTAyZDAyMWI2N2MxNjBjZTNmYjc5ODVjNzI4NWI2ZGYwZGE2MjYxmzzaRg==: 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTA5ZDQzODlhZTAyZDAyMWI2N2MxNjBjZTNmYjc5ODVjNzI4NWI2ZGYwZGE2MjYxmzzaRg==: 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: ]] 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: 00:26:05.338 09:08:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.338 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.598 nvme0n1 00:26:05.598 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.598 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.598 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.598 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.598 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.598 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.598 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.598 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.598 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.598 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.598 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.598 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.598 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:05.598 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.598 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:05.598 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:05.598 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:05.598 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDhkODFhYzE5YzIzOTM4Nzk4MmNkYTg5NTA4Y2IzYmbgcYOH: 00:26:05.598 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjUzYmJlZWYxNTM5NjRkMjViYTUwNDcyZGY1MGVkYTb6RiE+: 00:26:05.598 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:05.598 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:05.598 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDhkODFhYzE5YzIzOTM4Nzk4MmNkYTg5NTA4Y2IzYmbgcYOH: 00:26:05.598 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjUzYmJlZWYxNTM5NjRkMjViYTUwNDcyZGY1MGVkYTb6RiE+: ]] 00:26:05.598 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjUzYmJlZWYxNTM5NjRkMjViYTUwNDcyZGY1MGVkYTb6RiE+: 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.599 nvme0n1 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGJjYzMwZDI3ZWVjN2VjN2IwMjJjYmFmYzA4ZDdiYmVmYWUzZGIxNzQ1ODZjNzlmEwFWNg==: 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThlZGYyNzcyMGFlOTU2ZWE4OTFhNWRlZWJiNTM5NGIi5RZF: 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:ZGJjYzMwZDI3ZWVjN2VjN2IwMjJjYmFmYzA4ZDdiYmVmYWUzZGIxNzQ1ODZjNzlmEwFWNg==: 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThlZGYyNzcyMGFlOTU2ZWE4OTFhNWRlZWJiNTM5NGIi5RZF: ]] 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThlZGYyNzcyMGFlOTU2ZWE4OTFhNWRlZWJiNTM5NGIi5RZF: 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.599 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.859 nvme0n1 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDgxMTM5M2U2Y2EyMWEwMWY2Y2JlNzMwOTA3ZjYyOGM0MzM4MWEwNGFmMWRlNDgzMzBlODgyMDkxMDI5NWQyMtYn6Y4=: 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDgxMTM5M2U2Y2EyMWEwMWY2Y2JlNzMwOTA3ZjYyOGM0MzM4MWEwNGFmMWRlNDgzMzBlODgyMDkxMDI5NWQyMtYn6Y4=: 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.859 09:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.119 nvme0n1 00:26:06.119 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.119 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.119 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.119 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.119 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.119 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.119 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.119 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.119 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.119 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.119 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.119 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:06.120 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.120 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:06.120 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.120 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:06.120 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:06.120 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:06.120 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzdjZWJiNzdmZWQ0MjdlYjY2ZTQ3NzA3MzlkNzg5OTHjrpjW: 00:26:06.120 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MzViMzFmMTAwNTNkNjIwZGRjMjRmNDc0NTdhZDkxZTA5YTgxMzAxYmRmZTFlMDI2NmVmZTE2ZWRmYTFmOGEyN0me1cQ=: 00:26:06.120 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:06.120 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:06.120 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzdjZWJiNzdmZWQ0MjdlYjY2ZTQ3NzA3MzlkNzg5OTHjrpjW: 00:26:06.120 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzViMzFmMTAwNTNkNjIwZGRjMjRmNDc0NTdhZDkxZTA5YTgxMzAxYmRmZTFlMDI2NmVmZTE2ZWRmYTFmOGEyN0me1cQ=: ]] 00:26:06.120 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzViMzFmMTAwNTNkNjIwZGRjMjRmNDc0NTdhZDkxZTA5YTgxMzAxYmRmZTFlMDI2NmVmZTE2ZWRmYTFmOGEyN0me1cQ=: 00:26:06.120 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:06.120 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.120 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:06.120 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:06.120 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:06.120 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.120 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:06.120 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.120 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.120 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.120 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.120 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:06.120 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:06.120 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:06.120 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.120 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.120 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:06.120 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.120 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:06.120 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:06.120 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:06.120 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:06.120 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.120 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:06.120 nvme0n1 00:26:06.120 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.120 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.120 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.120 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.120 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTA5ZDQzODlhZTAyZDAyMWI2N2MxNjBjZTNmYjc5ODVjNzI4NWI2ZGYwZGE2MjYxmzzaRg==: 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTA5ZDQzODlhZTAyZDAyMWI2N2MxNjBjZTNmYjc5ODVjNzI4NWI2ZGYwZGE2MjYxmzzaRg==: 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: ]] 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.380 nvme0n1 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:06.380 
09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDhkODFhYzE5YzIzOTM4Nzk4MmNkYTg5NTA4Y2IzYmbgcYOH: 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjUzYmJlZWYxNTM5NjRkMjViYTUwNDcyZGY1MGVkYTb6RiE+: 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDhkODFhYzE5YzIzOTM4Nzk4MmNkYTg5NTA4Y2IzYmbgcYOH: 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjUzYmJlZWYxNTM5NjRkMjViYTUwNDcyZGY1MGVkYTb6RiE+: ]] 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjUzYmJlZWYxNTM5NjRkMjViYTUwNDcyZGY1MGVkYTb6RiE+: 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:06.380 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.640 nvme0n1 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGJjYzMwZDI3ZWVjN2VjN2IwMjJjYmFmYzA4ZDdiYmVmYWUzZGIxNzQ1ODZjNzlmEwFWNg==: 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThlZGYyNzcyMGFlOTU2ZWE4OTFhNWRlZWJiNTM5NGIi5RZF: 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGJjYzMwZDI3ZWVjN2VjN2IwMjJjYmFmYzA4ZDdiYmVmYWUzZGIxNzQ1ODZjNzlmEwFWNg==: 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThlZGYyNzcyMGFlOTU2ZWE4OTFhNWRlZWJiNTM5NGIi5RZF: ]] 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThlZGYyNzcyMGFlOTU2ZWE4OTFhNWRlZWJiNTM5NGIi5RZF: 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.640 
09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.640 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.900 nvme0n1 00:26:06.900 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.900 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.900 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.900 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.900 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.900 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.900 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.900 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.900 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.900 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:06.900 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.900 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.900 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:06.900 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.900 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:06.900 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:06.900 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:06.900 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDgxMTM5M2U2Y2EyMWEwMWY2Y2JlNzMwOTA3ZjYyOGM0MzM4MWEwNGFmMWRlNDgzMzBlODgyMDkxMDI5NWQyMtYn6Y4=: 00:26:06.900 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:06.900 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:06.900 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:06.900 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDgxMTM5M2U2Y2EyMWEwMWY2Y2JlNzMwOTA3ZjYyOGM0MzM4MWEwNGFmMWRlNDgzMzBlODgyMDkxMDI5NWQyMtYn6Y4=: 00:26:06.900 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:06.900 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:06.900 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.900 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:06.900 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:06.900 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:06.900 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.900 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:06.900 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.900 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.900 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.900 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.900 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:06.900 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:06.900 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:06.900 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.900 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.900 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:06.900 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.900 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:06.900 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:06.900 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:06.900 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:06.900 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.900 09:08:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.159 nvme0n1 00:26:07.159 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.159 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.159 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.159 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.159 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.159 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.159 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.159 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.159 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.159 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.159 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.159 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:07.159 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.159 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:26:07.159 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.159 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:07.159 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:07.160 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:07.160 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzdjZWJiNzdmZWQ0MjdlYjY2ZTQ3NzA3MzlkNzg5OTHjrpjW: 00:26:07.160 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzViMzFmMTAwNTNkNjIwZGRjMjRmNDc0NTdhZDkxZTA5YTgxMzAxYmRmZTFlMDI2NmVmZTE2ZWRmYTFmOGEyN0me1cQ=: 00:26:07.160 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:07.160 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:07.160 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzdjZWJiNzdmZWQ0MjdlYjY2ZTQ3NzA3MzlkNzg5OTHjrpjW: 00:26:07.160 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzViMzFmMTAwNTNkNjIwZGRjMjRmNDc0NTdhZDkxZTA5YTgxMzAxYmRmZTFlMDI2NmVmZTE2ZWRmYTFmOGEyN0me1cQ=: ]] 00:26:07.160 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MzViMzFmMTAwNTNkNjIwZGRjMjRmNDc0NTdhZDkxZTA5YTgxMzAxYmRmZTFlMDI2NmVmZTE2ZWRmYTFmOGEyN0me1cQ=: 00:26:07.160 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:26:07.160 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.160 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:07.160 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:07.160 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:07.160 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.160 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:07.160 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.160 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.160 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.160 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.160 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:07.160 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:07.160 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:07.160 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.160 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.160 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:07.160 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.160 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:07.160 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:07.160 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:07.160 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:07.160 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.160 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.419 nvme0n1 00:26:07.419 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.419 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.419 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.419 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.419 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.419 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.419 
09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.419 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.419 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.419 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.419 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.419 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.419 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:07.419 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.419 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:07.419 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:07.419 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:07.419 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTA5ZDQzODlhZTAyZDAyMWI2N2MxNjBjZTNmYjc5ODVjNzI4NWI2ZGYwZGE2MjYxmzzaRg==: 00:26:07.419 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: 00:26:07.419 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:07.419 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:07.419 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTA5ZDQzODlhZTAyZDAyMWI2N2MxNjBjZTNmYjc5ODVjNzI4NWI2ZGYwZGE2MjYxmzzaRg==: 00:26:07.419 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: ]] 00:26:07.419 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: 00:26:07.419 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:26:07.419 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.419 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:07.419 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:07.419 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:07.419 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.419 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:07.419 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.419 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.419 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.419 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.419 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:07.419 09:08:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:07.419 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:07.419 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.420 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.420 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:07.420 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.420 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:07.420 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:07.420 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:07.420 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:07.420 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.420 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.679 nvme0n1 00:26:07.679 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.679 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.679 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.679 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.679 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.679 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.679 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.679 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.679 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.679 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.679 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.679 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.679 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:07.679 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.679 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:07.679 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:07.679 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:07.679 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDhkODFhYzE5YzIzOTM4Nzk4MmNkYTg5NTA4Y2IzYmbgcYOH: 00:26:07.679 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjUzYmJlZWYxNTM5NjRkMjViYTUwNDcyZGY1MGVkYTb6RiE+: 00:26:07.679 09:08:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:07.679 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:07.679 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDhkODFhYzE5YzIzOTM4Nzk4MmNkYTg5NTA4Y2IzYmbgcYOH: 00:26:07.679 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjUzYmJlZWYxNTM5NjRkMjViYTUwNDcyZGY1MGVkYTb6RiE+: ]] 00:26:07.679 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjUzYmJlZWYxNTM5NjRkMjViYTUwNDcyZGY1MGVkYTb6RiE+: 00:26:07.679 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:26:07.679 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.679 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:07.679 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:07.679 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:07.679 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.679 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:07.679 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.679 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.679 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.679 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.679 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:07.679 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:07.679 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:07.679 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.679 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.679 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:07.679 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.679 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:07.679 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:07.679 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:07.679 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:07.679 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.679 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.939 nvme0n1 00:26:07.939 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.939 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.939 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.939 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.939 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.939 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.939 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.939 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.939 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.939 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.939 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.939 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.939 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:26:07.939 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.939 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:07.939 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:07.939 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:07.939 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGJjYzMwZDI3ZWVjN2VjN2IwMjJjYmFmYzA4ZDdiYmVmYWUzZGIxNzQ1ODZjNzlmEwFWNg==: 00:26:07.939 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThlZGYyNzcyMGFlOTU2ZWE4OTFhNWRlZWJiNTM5NGIi5RZF: 00:26:07.939 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:07.939 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:07.939 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGJjYzMwZDI3ZWVjN2VjN2IwMjJjYmFmYzA4ZDdiYmVmYWUzZGIxNzQ1ODZjNzlmEwFWNg==: 00:26:07.939 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThlZGYyNzcyMGFlOTU2ZWE4OTFhNWRlZWJiNTM5NGIi5RZF: ]] 00:26:07.939 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThlZGYyNzcyMGFlOTU2ZWE4OTFhNWRlZWJiNTM5NGIi5RZF: 00:26:07.939 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:26:07.939 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.939 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:07.939 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:07.939 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:07.939 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.939 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:07.939 09:08:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.939 09:08:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.939 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.939 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.939 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:07.939 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:07.939 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:07.939 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.939 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.939 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:07.939 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.939 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:07.939 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:07.939 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:07.939 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:07.939 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.939 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.228 nvme0n1 00:26:08.228 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.228 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.228 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.228 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.228 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.228 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.228 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.228 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.228 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.228 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.228 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.228 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.228 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:26:08.228 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.228 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:08.228 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:08.228 
09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:08.228 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDgxMTM5M2U2Y2EyMWEwMWY2Y2JlNzMwOTA3ZjYyOGM0MzM4MWEwNGFmMWRlNDgzMzBlODgyMDkxMDI5NWQyMtYn6Y4=: 00:26:08.228 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:08.228 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:08.228 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:08.228 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDgxMTM5M2U2Y2EyMWEwMWY2Y2JlNzMwOTA3ZjYyOGM0MzM4MWEwNGFmMWRlNDgzMzBlODgyMDkxMDI5NWQyMtYn6Y4=: 00:26:08.228 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:08.228 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:26:08.228 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.228 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:08.228 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:08.228 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:08.228 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.228 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:08.228 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.228 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.228 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.228 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.228 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:08.228 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:08.228 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:08.228 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.228 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.228 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:08.228 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.228 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:08.228 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:08.228 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:08.228 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:08.228 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.228 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
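The cycle that repeats throughout this stretch of the log is always the same: bdev_nvme_set_options pins the DH-HMAC-CHAP digest and DH group, bdev_nvme_attach_controller connects to the kernel target at 10.0.0.1:4420 with the host secret (and, where one exists, the controller secret), bdev_nvme_get_controllers confirms that nvme0 came up, and bdev_nvme_detach_controller tears it back down before the next key index. As a minimal illustrative sketch only, not the test's auth.sh itself: "rpc" below stands in for the suite's rpc_cmd wrapper around scripts/rpc.py, and key names key0..key4 / ckey0..ckey3 are assumed to already be registered, matching the DHHC-1 secrets echoed above.

#!/usr/bin/env bash
# Illustrative sketch of the connect/verify/detach cycle seen in the log.
# Assumes rpc.py can reach the initiator and that keys key0..key4 and
# ckey0..ckey3 are already loaded under those names (an assumption of this
# sketch, not something shown being done here).
set -e
rpc="./scripts/rpc.py"

for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144; do
    for keyid in 0 1 2 3 4; do
        # Allow only one digest/DH-group combination on the initiator side.
        "$rpc" bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"

        # Key index 4 carries no controller (bidirectional) secret in this run,
        # so --dhchap-ctrlr-key is passed only when a ckey exists.
        ckey_arg=()
        if [ "$keyid" -ne 4 ]; then
            ckey_arg=(--dhchap-ctrlr-key "ckey${keyid}")
        fi

        "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey_arg[@]}"

        # A successful DH-HMAC-CHAP handshake leaves controller nvme0 behind.
        [ "$("$rpc" bdev_nvme_get_controllers | jq -r '.[].name')" = nvme0 ]

        "$rpc" bdev_nvme_detach_controller nvme0
    done
done

Every iteration has to succeed end to end; a failed handshake would leave bdev_nvme_get_controllers empty and the [[ nvme0 == nvme0 ]] check above would not pass.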
00:26:08.487 nvme0n1 00:26:08.487 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.487 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.487 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.487 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.487 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.487 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.487 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.487 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.487 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.487 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.487 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.487 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:08.487 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.487 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:08.487 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.488 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:08.488 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:08.488 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:08.488 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzdjZWJiNzdmZWQ0MjdlYjY2ZTQ3NzA3MzlkNzg5OTHjrpjW: 00:26:08.488 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzViMzFmMTAwNTNkNjIwZGRjMjRmNDc0NTdhZDkxZTA5YTgxMzAxYmRmZTFlMDI2NmVmZTE2ZWRmYTFmOGEyN0me1cQ=: 00:26:08.488 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:08.488 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:08.488 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzdjZWJiNzdmZWQ0MjdlYjY2ZTQ3NzA3MzlkNzg5OTHjrpjW: 00:26:08.488 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzViMzFmMTAwNTNkNjIwZGRjMjRmNDc0NTdhZDkxZTA5YTgxMzAxYmRmZTFlMDI2NmVmZTE2ZWRmYTFmOGEyN0me1cQ=: ]] 00:26:08.488 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzViMzFmMTAwNTNkNjIwZGRjMjRmNDc0NTdhZDkxZTA5YTgxMzAxYmRmZTFlMDI2NmVmZTE2ZWRmYTFmOGEyN0me1cQ=: 00:26:08.488 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:26:08.488 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.488 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:08.488 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:08.488 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:08.488 09:08:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.488 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:08.488 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.488 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.488 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.488 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.488 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:08.488 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:08.488 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:08.488 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.488 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.488 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:08.488 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.488 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:08.488 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:08.488 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:08.488 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:08.488 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.488 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.055 nvme0n1 00:26:09.055 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.055 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.055 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.055 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.055 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.055 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.055 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.055 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.055 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.055 09:08:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.055 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.055 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.055 09:08:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:26:09.055 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.055 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:09.055 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:09.055 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:09.056 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTA5ZDQzODlhZTAyZDAyMWI2N2MxNjBjZTNmYjc5ODVjNzI4NWI2ZGYwZGE2MjYxmzzaRg==: 00:26:09.056 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: 00:26:09.056 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:09.056 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:09.056 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTA5ZDQzODlhZTAyZDAyMWI2N2MxNjBjZTNmYjc5ODVjNzI4NWI2ZGYwZGE2MjYxmzzaRg==: 00:26:09.056 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: ]] 00:26:09.056 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: 00:26:09.056 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:26:09.056 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.056 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:09.056 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:09.056 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:09.056 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.056 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:09.056 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.056 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.056 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.056 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.056 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:09.056 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:09.056 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:09.056 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.056 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.056 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:09.056 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.056 09:08:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:09.056 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:09.056 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:09.056 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:09.056 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.056 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.314 nvme0n1 00:26:09.314 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.314 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.314 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.314 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.314 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.314 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.573 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.573 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.573 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.573 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.573 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.573 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.573 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:26:09.573 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.573 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:09.573 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:09.573 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:09.573 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDhkODFhYzE5YzIzOTM4Nzk4MmNkYTg5NTA4Y2IzYmbgcYOH: 00:26:09.573 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjUzYmJlZWYxNTM5NjRkMjViYTUwNDcyZGY1MGVkYTb6RiE+: 00:26:09.573 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:09.573 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:09.573 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDhkODFhYzE5YzIzOTM4Nzk4MmNkYTg5NTA4Y2IzYmbgcYOH: 00:26:09.573 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjUzYmJlZWYxNTM5NjRkMjViYTUwNDcyZGY1MGVkYTb6RiE+: ]] 00:26:09.573 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjUzYmJlZWYxNTM5NjRkMjViYTUwNDcyZGY1MGVkYTb6RiE+: 00:26:09.573 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:26:09.573 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.573 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:09.573 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:09.573 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:09.573 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.573 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:09.573 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.573 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.573 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.573 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.573 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:09.573 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:09.573 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:09.573 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.573 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.573 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:09.573 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.573 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:09.573 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:09.573 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:09.573 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:09.573 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.573 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.832 nvme0n1 00:26:09.832 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.832 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.832 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.832 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.833 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.833 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.833 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.833 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:09.833 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.833 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.833 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.833 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.833 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:26:09.833 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.833 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:09.833 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:09.833 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:09.833 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGJjYzMwZDI3ZWVjN2VjN2IwMjJjYmFmYzA4ZDdiYmVmYWUzZGIxNzQ1ODZjNzlmEwFWNg==: 00:26:09.833 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThlZGYyNzcyMGFlOTU2ZWE4OTFhNWRlZWJiNTM5NGIi5RZF: 00:26:09.833 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:09.833 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:09.833 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGJjYzMwZDI3ZWVjN2VjN2IwMjJjYmFmYzA4ZDdiYmVmYWUzZGIxNzQ1ODZjNzlmEwFWNg==: 00:26:09.833 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThlZGYyNzcyMGFlOTU2ZWE4OTFhNWRlZWJiNTM5NGIi5RZF: ]] 00:26:09.833 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThlZGYyNzcyMGFlOTU2ZWE4OTFhNWRlZWJiNTM5NGIi5RZF: 00:26:09.833 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:26:09.833 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.833 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:09.833 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:09.833 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:09.833 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.833 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:09.833 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.833 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.833 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.833 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.833 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:09.833 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:09.833 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:09.833 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.833 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.833 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:09.833 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.833 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:09.833 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:09.833 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:09.833 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:09.833 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.833 09:08:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.402 nvme0n1 00:26:10.402 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.402 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.402 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.402 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.402 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.402 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.402 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.402 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.402 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.402 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.402 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.402 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.402 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:10.402 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.402 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:10.402 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:10.402 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:10.402 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDgxMTM5M2U2Y2EyMWEwMWY2Y2JlNzMwOTA3ZjYyOGM0MzM4MWEwNGFmMWRlNDgzMzBlODgyMDkxMDI5NWQyMtYn6Y4=: 00:26:10.402 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:10.402 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:10.402 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:10.402 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NDgxMTM5M2U2Y2EyMWEwMWY2Y2JlNzMwOTA3ZjYyOGM0MzM4MWEwNGFmMWRlNDgzMzBlODgyMDkxMDI5NWQyMtYn6Y4=: 00:26:10.402 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:10.402 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:26:10.402 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.402 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:10.402 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:10.402 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:10.402 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.402 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:10.402 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.402 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.402 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.402 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.402 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:10.402 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:10.402 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:10.402 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.402 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.402 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:10.402 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.402 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:10.402 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:10.402 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:10.402 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:10.402 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.402 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.662 nvme0n1 00:26:10.662 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.662 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.662 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.662 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.662 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.662 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.662 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.662 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.662 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.662 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.922 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.922 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:10.922 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.922 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:10.922 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.922 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:10.922 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:10.922 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:10.922 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzdjZWJiNzdmZWQ0MjdlYjY2ZTQ3NzA3MzlkNzg5OTHjrpjW: 00:26:10.922 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzViMzFmMTAwNTNkNjIwZGRjMjRmNDc0NTdhZDkxZTA5YTgxMzAxYmRmZTFlMDI2NmVmZTE2ZWRmYTFmOGEyN0me1cQ=: 00:26:10.922 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:10.922 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:10.922 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzdjZWJiNzdmZWQ0MjdlYjY2ZTQ3NzA3MzlkNzg5OTHjrpjW: 00:26:10.922 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzViMzFmMTAwNTNkNjIwZGRjMjRmNDc0NTdhZDkxZTA5YTgxMzAxYmRmZTFlMDI2NmVmZTE2ZWRmYTFmOGEyN0me1cQ=: ]] 00:26:10.922 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzViMzFmMTAwNTNkNjIwZGRjMjRmNDc0NTdhZDkxZTA5YTgxMzAxYmRmZTFlMDI2NmVmZTE2ZWRmYTFmOGEyN0me1cQ=: 00:26:10.922 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:26:10.922 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.922 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:10.922 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:10.922 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:10.922 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.922 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:10.922 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.922 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.922 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.922 09:08:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.922 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:10.922 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:10.922 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:10.922 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.922 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.922 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:10.922 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.922 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:10.922 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:10.922 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:10.922 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:10.922 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.922 09:08:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.490 nvme0n1 00:26:11.490 09:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.490 09:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.490 09:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.490 09:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.490 09:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.490 09:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.490 09:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.490 09:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.490 09:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.490 09:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.490 09:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.490 09:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.490 09:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:11.490 09:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.490 09:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:11.490 09:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:11.490 09:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:11.490 09:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YTA5ZDQzODlhZTAyZDAyMWI2N2MxNjBjZTNmYjc5ODVjNzI4NWI2ZGYwZGE2MjYxmzzaRg==: 00:26:11.490 09:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: 00:26:11.490 09:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:11.490 09:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:11.490 09:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTA5ZDQzODlhZTAyZDAyMWI2N2MxNjBjZTNmYjc5ODVjNzI4NWI2ZGYwZGE2MjYxmzzaRg==: 00:26:11.490 09:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: ]] 00:26:11.490 09:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: 00:26:11.490 09:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:26:11.490 09:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.490 09:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:11.490 09:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:11.490 09:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:11.490 09:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.490 09:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:11.490 09:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.490 09:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.490 09:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.490 09:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.490 09:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:11.490 09:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:11.490 09:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:11.490 09:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.490 09:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.490 09:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:11.490 09:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.490 09:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:11.490 09:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:11.490 09:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:11.490 09:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:11.490 09:08:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.490 09:08:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.058 nvme0n1 00:26:12.058 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.058 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.058 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.058 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.058 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.058 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.058 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.058 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.058 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.058 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.058 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.058 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.058 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:12.058 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.058 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:12.058 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:12.058 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:12.058 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDhkODFhYzE5YzIzOTM4Nzk4MmNkYTg5NTA4Y2IzYmbgcYOH: 00:26:12.058 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjUzYmJlZWYxNTM5NjRkMjViYTUwNDcyZGY1MGVkYTb6RiE+: 00:26:12.058 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:12.058 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:12.058 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDhkODFhYzE5YzIzOTM4Nzk4MmNkYTg5NTA4Y2IzYmbgcYOH: 00:26:12.058 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjUzYmJlZWYxNTM5NjRkMjViYTUwNDcyZGY1MGVkYTb6RiE+: ]] 00:26:12.058 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjUzYmJlZWYxNTM5NjRkMjViYTUwNDcyZGY1MGVkYTb6RiE+: 00:26:12.058 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:26:12.058 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.058 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:12.058 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:12.058 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:12.058 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.058 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:12.058 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.058 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.058 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.058 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.058 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:12.058 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:12.058 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:12.058 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.058 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.058 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:12.058 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.058 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:12.058 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:12.058 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:12.058 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:12.058 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.058 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.995 nvme0n1 00:26:12.995 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.995 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.995 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.995 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.995 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.995 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.995 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.995 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.995 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.995 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.995 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.995 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.995 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:26:12.995 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.995 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:12.995 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:12.995 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:12.995 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGJjYzMwZDI3ZWVjN2VjN2IwMjJjYmFmYzA4ZDdiYmVmYWUzZGIxNzQ1ODZjNzlmEwFWNg==: 00:26:12.995 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThlZGYyNzcyMGFlOTU2ZWE4OTFhNWRlZWJiNTM5NGIi5RZF: 00:26:12.995 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:12.995 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:12.995 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGJjYzMwZDI3ZWVjN2VjN2IwMjJjYmFmYzA4ZDdiYmVmYWUzZGIxNzQ1ODZjNzlmEwFWNg==: 00:26:12.995 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThlZGYyNzcyMGFlOTU2ZWE4OTFhNWRlZWJiNTM5NGIi5RZF: ]] 00:26:12.995 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThlZGYyNzcyMGFlOTU2ZWE4OTFhNWRlZWJiNTM5NGIi5RZF: 00:26:12.995 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:12.995 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.995 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:12.995 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:12.995 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:12.995 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.995 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:12.995 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.995 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.995 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.995 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.995 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:12.995 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:12.995 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:12.995 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.995 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.995 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:12.995 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.995 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:12.995 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:12.995 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:12.995 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:12.995 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.995 09:08:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.562 nvme0n1 00:26:13.562 09:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.562 09:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.562 09:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.562 09:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.562 09:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.562 09:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.562 09:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.562 09:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.562 09:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.562 09:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.562 09:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.562 09:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.562 09:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:13.562 09:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.562 09:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:13.562 09:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:13.562 09:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:13.562 09:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDgxMTM5M2U2Y2EyMWEwMWY2Y2JlNzMwOTA3ZjYyOGM0MzM4MWEwNGFmMWRlNDgzMzBlODgyMDkxMDI5NWQyMtYn6Y4=: 00:26:13.562 09:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:13.562 09:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:13.562 09:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:13.562 09:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDgxMTM5M2U2Y2EyMWEwMWY2Y2JlNzMwOTA3ZjYyOGM0MzM4MWEwNGFmMWRlNDgzMzBlODgyMDkxMDI5NWQyMtYn6Y4=: 00:26:13.562 09:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:13.562 09:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:13.562 09:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.562 09:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:13.562 09:08:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:13.562 09:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:13.562 09:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.562 09:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:13.562 09:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.562 09:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.562 09:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.562 09:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.562 09:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:13.562 09:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:13.562 09:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:13.562 09:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.562 09:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.562 09:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:13.562 09:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.562 09:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:13.562 09:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:13.562 09:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:13.562 09:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:13.562 09:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.562 09:08:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.130 nvme0n1 00:26:14.130 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.130 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.130 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.130 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.130 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.130 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.130 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.130 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.130 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.130 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.130 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.130 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:14.130 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.130 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:14.130 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:14.130 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:14.130 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTA5ZDQzODlhZTAyZDAyMWI2N2MxNjBjZTNmYjc5ODVjNzI4NWI2ZGYwZGE2MjYxmzzaRg==: 00:26:14.130 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: 00:26:14.130 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:14.130 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:14.130 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTA5ZDQzODlhZTAyZDAyMWI2N2MxNjBjZTNmYjc5ODVjNzI4NWI2ZGYwZGE2MjYxmzzaRg==: 00:26:14.130 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: ]] 00:26:14.130 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmQ5NGVlYmMyYjMzYWY2NTFjNWQ5Nzc5YTdiNGI0NjIzMjllNTY2YWZjZWYyMWM0y4TMNw==: 00:26:14.130 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:14.130 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.130 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.130 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.130 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:14.130 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:14.130 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:14.130 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:14.130 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.130 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.130 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:14.130 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.130 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:14.130 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:14.130 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:14.130 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:14.130 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # 
local es=0 00:26:14.130 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:14.130 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:14.130 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:14.130 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:14.389 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:14.389 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:14.389 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.389 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.389 request: 00:26:14.389 { 00:26:14.389 "name": "nvme0", 00:26:14.389 "trtype": "tcp", 00:26:14.389 "traddr": "10.0.0.1", 00:26:14.389 "adrfam": "ipv4", 00:26:14.389 "trsvcid": "4420", 00:26:14.389 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:14.389 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:14.389 "prchk_reftag": false, 00:26:14.389 "prchk_guard": false, 00:26:14.389 "hdgst": false, 00:26:14.389 "ddgst": false, 00:26:14.389 "method": "bdev_nvme_attach_controller", 00:26:14.389 "req_id": 1 00:26:14.389 } 00:26:14.389 Got JSON-RPC error response 00:26:14.389 response: 00:26:14.389 { 00:26:14.389 "code": -5, 00:26:14.389 "message": "Input/output error" 00:26:14.389 } 00:26:14.389 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:14.389 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:14.389 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:14.389 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:14.389 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:14.389 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.389 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:26:14.389 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.390 09:08:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.390 request: 00:26:14.390 { 00:26:14.390 "name": "nvme0", 00:26:14.390 "trtype": "tcp", 00:26:14.390 "traddr": "10.0.0.1", 00:26:14.390 "adrfam": "ipv4", 00:26:14.390 "trsvcid": "4420", 00:26:14.390 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:14.390 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:14.390 "prchk_reftag": false, 00:26:14.390 "prchk_guard": false, 00:26:14.390 "hdgst": false, 00:26:14.390 "ddgst": false, 00:26:14.390 "dhchap_key": "key2", 00:26:14.390 "method": "bdev_nvme_attach_controller", 00:26:14.390 "req_id": 1 00:26:14.390 } 00:26:14.390 Got JSON-RPC error response 00:26:14.390 response: 00:26:14.390 { 00:26:14.390 "code": -5, 00:26:14.390 "message": "Input/output error" 00:26:14.390 } 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.390 09:08:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.390 request: 00:26:14.390 { 00:26:14.390 "name": "nvme0", 00:26:14.390 "trtype": "tcp", 00:26:14.390 "traddr": "10.0.0.1", 00:26:14.390 "adrfam": "ipv4", 00:26:14.390 "trsvcid": "4420", 00:26:14.390 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:14.390 "hostnqn": "nqn.2024-02.io.spdk:host0", 
00:26:14.390 "prchk_reftag": false, 00:26:14.390 "prchk_guard": false, 00:26:14.390 "hdgst": false, 00:26:14.390 "ddgst": false, 00:26:14.390 "dhchap_key": "key1", 00:26:14.390 "dhchap_ctrlr_key": "ckey2", 00:26:14.390 "method": "bdev_nvme_attach_controller", 00:26:14.390 "req_id": 1 00:26:14.390 } 00:26:14.390 Got JSON-RPC error response 00:26:14.390 response: 00:26:14.390 { 00:26:14.390 "code": -5, 00:26:14.390 "message": "Input/output error" 00:26:14.390 } 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:14.390 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:14.390 rmmod nvme_tcp 00:26:14.390 rmmod nvme_fabrics 00:26:14.649 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:14.649 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:26:14.649 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:26:14.649 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 84250 ']' 00:26:14.649 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 84250 00:26:14.649 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 84250 ']' 00:26:14.649 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 84250 00:26:14.649 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:26:14.649 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:14.649 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84250 00:26:14.649 killing process with pid 84250 00:26:14.649 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:14.649 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:14.649 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84250' 00:26:14.649 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 84250 00:26:14.649 09:08:21 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@974 -- # wait 84250 00:26:16.027 09:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:16.027 09:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:16.027 09:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:16.027 09:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:16.027 09:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:16.027 09:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:16.027 09:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:16.027 09:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:16.027 09:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:16.027 09:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:16.027 09:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:16.027 09:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:16.027 09:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:16.027 09:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:26:16.027 09:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:16.027 09:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:16.027 09:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:16.027 09:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:16.027 09:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:16.027 09:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:16.027 09:08:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:16.617 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:16.617 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:26:16.617 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:26:16.617 09:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.swo /tmp/spdk.key-null.aQL /tmp/spdk.key-sha256.BVS /tmp/spdk.key-sha384.vnm /tmp/spdk.key-sha512.i4s /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:26:16.617 09:08:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:17.183 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:17.183 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:17.183 0000:00:10.0 
(1b36 0010): Already using the uio_pci_generic driver 00:26:17.183 ************************************ 00:26:17.183 END TEST nvmf_auth_host 00:26:17.183 ************************************ 00:26:17.183 00:26:17.183 real 0m36.841s 00:26:17.183 user 0m32.619s 00:26:17.183 sys 0m3.982s 00:26:17.183 09:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:17.183 09:08:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.183 09:08:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:26:17.183 09:08:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:17.183 09:08:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:17.183 09:08:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:17.183 09:08:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.183 ************************************ 00:26:17.183 START TEST nvmf_digest 00:26:17.183 ************************************ 00:26:17.183 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:17.183 * Looking for test storage... 00:26:17.183 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:17.183 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:17.183 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:17.183 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:17.183 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:17.183 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:17.183 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:17.183 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:17.183 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:17.183 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:17.183 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:17.183 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:17.183 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:17.183 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:26:17.183 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:26:17.183 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:17.183 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:17.183 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:17.183 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # 
'[' 0 -eq 1 ']' 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:17.184 09:08:24 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:17.184 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:17.443 Cannot find device "nvmf_tgt_br" 00:26:17.443 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # true 00:26:17.443 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:17.443 Cannot find device "nvmf_tgt_br2" 00:26:17.443 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # true 00:26:17.443 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:17.443 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:17.443 Cannot find device "nvmf_tgt_br" 00:26:17.443 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # true 00:26:17.443 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:17.443 Cannot find device "nvmf_tgt_br2" 00:26:17.443 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # true 00:26:17.443 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:17.443 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:17.443 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:17.443 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:17.443 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:26:17.443 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:17.443 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:17.443 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:26:17.443 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:17.443 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:17.443 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:17.443 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:17.443 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:17.443 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:17.443 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:17.443 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:17.443 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev 
nvmf_tgt_if2 00:26:17.443 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:17.443 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:17.443 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:17.443 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:17.443 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:17.443 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:17.443 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:17.443 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:17.443 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:17.443 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:17.443 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:17.443 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:17.748 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:17.748 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:17.748 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:17.748 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:17.748 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:26:17.748 00:26:17.748 --- 10.0.0.2 ping statistics --- 00:26:17.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:17.748 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:26:17.748 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:17.748 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:17.748 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:26:17.748 00:26:17.748 --- 10.0.0.3 ping statistics --- 00:26:17.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:17.748 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:26:17.748 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:17.748 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:17.748 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:26:17.748 00:26:17.748 --- 10.0.0.1 ping statistics --- 00:26:17.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:17.748 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:26:17.748 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:17.748 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:26:17.748 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:17.748 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:17.748 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:17.748 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:17.748 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:17.748 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:17.748 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:17.748 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:17.748 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:17.748 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:17.748 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:17.748 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:17.748 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:17.748 ************************************ 00:26:17.748 START TEST nvmf_digest_clean 00:26:17.748 ************************************ 00:26:17.748 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:26:17.748 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:26:17.748 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:17.748 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:17.748 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:17.748 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:17.748 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:17.748 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:17.748 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:17.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
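The nvmf_veth_init records above build the virtual test network the digest suite runs on: an initiator veth (nvmf_init_if, 10.0.0.1) stays in the root namespace, the target veths (nvmf_tgt_if at 10.0.0.2 and nvmf_tgt_if2 at 10.0.0.3) move into the nvmf_tgt_ns_spdk namespace, and the peer ends are enslaved to the nvmf_br bridge; the pings then confirm reachability before nvme-tcp is loaded. A minimal sketch of that topology, reduced to a single target interface (the second one is set up the same way):

# Sketch of the topology nvmf_veth_init builds, one target interface only.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2                                   # root namespace -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator

Keeping the *_br peer ends in the root namespace and bridging them on nvmf_br is what lets 10.0.0.1 reach 10.0.0.2 even though the target interface lives inside the namespace.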
00:26:17.748 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=85836 00:26:17.748 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 85836 00:26:17.748 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:17.748 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 85836 ']' 00:26:17.748 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:17.748 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:17.748 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:17.748 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:17.748 09:08:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:17.748 [2024-07-25 09:08:24.727286] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:26:17.748 [2024-07-25 09:08:24.727455] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:18.006 [2024-07-25 09:08:24.908549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.264 [2024-07-25 09:08:25.144857] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:18.265 [2024-07-25 09:08:25.145235] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:18.265 [2024-07-25 09:08:25.145281] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:18.265 [2024-07-25 09:08:25.145313] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:18.265 [2024-07-25 09:08:25.145336] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
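nvmfappstart in the records above launches nvmf_tgt inside the target namespace with --wait-for-rpc and then blocks until the app answers on its default JSON-RPC socket (/var/tmp/spdk.sock). A rough sketch of that launch-and-wait pattern; the rpc_get_methods polling loop here is an assumption (waitforlisten does its own socket probing), while the binary path, namespace, and flags mirror the log:

# Start the target inside the namespace, then poll until its RPC socket answers.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!

until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5       # rpc.py keeps failing until /var/tmp/spdk.sock is up
done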
00:26:18.265 [2024-07-25 09:08:25.145391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:18.831 09:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:18.831 09:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:26:18.831 09:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:18.831 09:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:18.831 09:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:18.831 09:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:18.831 09:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:18.831 09:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:18.831 09:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:18.831 09:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.831 09:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:18.831 [2024-07-25 09:08:25.940310] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:26:19.090 null0 00:26:19.090 [2024-07-25 09:08:26.066090] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:19.090 [2024-07-25 09:08:26.090257] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:19.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
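The notices above (the uring socket implementation override, the null0 bdev, the TCP transport init, and the listener on 10.0.0.2 port 4420) are the result of the target-side configuration that common_target_config pushes over /var/tmp/spdk.sock. A hedged reconstruction with rpc.py follows; the null bdev size and block size and the exact option flags are assumptions, while the subsystem NQN, serial, and listener address come from the surrounding records:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py      # talks to /var/tmp/spdk.sock by default

$RPC sock_set_default_impl -i uring                  # matches the "override: uring" notice above
$RPC framework_start_init                            # target was started with --wait-for-rpc
$RPC bdev_null_create null0 100 4096                 # size and block size are assumptions
$RPC nvmf_create_transport -t tcp                    # "*** TCP Transport Init ***"
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
     -t tcp -a 10.0.0.2 -s 4420 -f ipv4              # "Listening on 10.0.0.2 port 4420"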
00:26:19.090 09:08:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.090 09:08:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:26:19.090 09:08:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:19.090 09:08:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:19.090 09:08:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:19.090 09:08:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:19.090 09:08:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:19.090 09:08:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:19.090 09:08:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=85868 00:26:19.090 09:08:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 85868 /var/tmp/bperf.sock 00:26:19.090 09:08:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 85868 ']' 00:26:19.090 09:08:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:19.090 09:08:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:19.090 09:08:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:19.090 09:08:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:19.090 09:08:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:19.090 09:08:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:19.090 [2024-07-25 09:08:26.203262] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
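On the client side, run_bperf starts bdevperf with --wait-for-rpc against its own RPC socket (/var/tmp/bperf.sock), finishes initialization over that socket, attaches the remote namespace with data digest enabled (--ddgst), and then drives the workload with bdevperf.py, as the following records show. Condensed into a sketch with the paths and flags as they appear in the log (the test also waits for bperf.sock to appear before issuing RPCs):

BPERF_SOCK=/var/tmp/bperf.sock
SPDK=/home/vagrant/spdk_repo/spdk

$SPDK/build/examples/bdevperf -m 2 -r $BPERF_SOCK \
    -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

$SPDK/scripts/rpc.py -s $BPERF_SOCK framework_start_init
$SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
$SPDK/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests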
00:26:19.091 [2024-07-25 09:08:26.203941] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85868 ] 00:26:19.348 [2024-07-25 09:08:26.379709] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.607 [2024-07-25 09:08:26.618422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:20.214 09:08:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:20.214 09:08:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:26:20.214 09:08:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:20.214 09:08:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:20.214 09:08:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:20.473 [2024-07-25 09:08:27.582435] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:26:20.732 09:08:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:20.732 09:08:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:20.990 nvme0n1 00:26:20.991 09:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:20.991 09:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:21.249 Running I/O for 2 seconds... 
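The Latency table that follows can be cross-checked by hand: IOPS multiplied by the I/O size should reproduce the MiB/s column. For this 4 KiB randread run, 11350.33 IOPS x 4096 bytes works out to about 44.34 MiB/s, matching the table; the later 128 KiB run checks out the same way (5945.60 x 128 KiB = 743.20 MiB/s). As a one-liner:

awk 'BEGIN { printf "%.2f MiB/s\n", 11350.33 * 4096 / (1024 * 1024) }'    # -> 44.34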
00:26:23.149 00:26:23.149 Latency(us) 00:26:23.149 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:23.149 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:23.149 nvme0n1 : 2.00 11350.33 44.34 0.00 0.00 11267.48 10247.45 28359.21 00:26:23.149 =================================================================================================================== 00:26:23.149 Total : 11350.33 44.34 0.00 0.00 11267.48 10247.45 28359.21 00:26:23.149 0 00:26:23.149 09:08:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:23.149 09:08:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:23.149 09:08:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:23.149 09:08:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:23.149 | select(.opcode=="crc32c") 00:26:23.149 | "\(.module_name) \(.executed)"' 00:26:23.149 09:08:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:23.407 09:08:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:23.407 09:08:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:23.407 09:08:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:23.407 09:08:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:23.407 09:08:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 85868 00:26:23.407 09:08:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 85868 ']' 00:26:23.407 09:08:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 85868 00:26:23.407 09:08:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:26:23.407 09:08:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:23.407 09:08:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85868 00:26:23.407 killing process with pid 85868 00:26:23.407 Received shutdown signal, test time was about 2.000000 seconds 00:26:23.407 00:26:23.407 Latency(us) 00:26:23.407 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:23.407 =================================================================================================================== 00:26:23.407 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:23.407 09:08:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:23.407 09:08:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:23.407 09:08:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85868' 00:26:23.407 09:08:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 85868 00:26:23.407 09:08:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # 
wait 85868 00:26:24.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:24.805 09:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:24.805 09:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:24.805 09:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:24.805 09:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:24.805 09:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:24.805 09:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:24.805 09:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:24.805 09:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=85942 00:26:24.805 09:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:24.805 09:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 85942 /var/tmp/bperf.sock 00:26:24.806 09:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 85942 ']' 00:26:24.806 09:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:24.806 09:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:24.806 09:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:24.806 09:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:24.806 09:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:24.806 [2024-07-25 09:08:31.714485] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:26:24.806 [2024-07-25 09:08:31.714928] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85942 ] 00:26:24.806 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:24.806 Zero copy mechanism will not be used. 
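Each workload in the clean-digest phase repeats the same start/attach/measure cycle; only the I/O pattern, block size, and queue depth handed to bdevperf change. From the digest.sh call sites visible in these records, the phase amounts to the sequence below (the trailing false is scan_dsa, i.e. no DSA offload, which is why the crc32c work is expected to land in the software accel module):

run_bperf randread  4096   128 false    # host/digest.sh@128
run_bperf randread  131072 16  false    # host/digest.sh@129
run_bperf randwrite 4096   128 false    # host/digest.sh@130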
00:26:24.806 [2024-07-25 09:08:31.883554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.065 [2024-07-25 09:08:32.148349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:26.000 09:08:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:26.000 09:08:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:26:26.000 09:08:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:26.000 09:08:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:26.000 09:08:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:26.259 [2024-07-25 09:08:33.277444] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:26:26.517 09:08:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:26.517 09:08:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:26.775 nvme0n1 00:26:26.775 09:08:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:26.775 09:08:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:26.775 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:26.775 Zero copy mechanism will not be used. 00:26:26.775 Running I/O for 2 seconds... 
00:26:29.309 00:26:29.309 Latency(us) 00:26:29.309 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:29.309 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:29.309 nvme0n1 : 2.00 5945.60 743.20 0.00 0.00 2686.85 2472.49 11915.64 00:26:29.309 =================================================================================================================== 00:26:29.309 Total : 5945.60 743.20 0.00 0.00 2686.85 2472.49 11915.64 00:26:29.309 0 00:26:29.309 09:08:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:29.309 09:08:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:29.309 09:08:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:29.309 09:08:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:29.309 09:08:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:29.309 | select(.opcode=="crc32c") 00:26:29.309 | "\(.module_name) \(.executed)"' 00:26:29.309 09:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:29.309 09:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:29.309 09:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:29.309 09:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:29.309 09:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 85942 00:26:29.309 09:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 85942 ']' 00:26:29.309 09:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 85942 00:26:29.309 09:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:26:29.309 09:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:29.309 09:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85942 00:26:29.309 killing process with pid 85942 00:26:29.309 Received shutdown signal, test time was about 2.000000 seconds 00:26:29.309 00:26:29.309 Latency(us) 00:26:29.309 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:29.309 =================================================================================================================== 00:26:29.309 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:29.309 09:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:29.309 09:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:29.309 09:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85942' 00:26:29.309 09:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 85942 00:26:29.309 09:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 
85942 00:26:30.244 09:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:30.244 09:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:30.244 09:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:30.244 09:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:30.244 09:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:30.244 09:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:30.244 09:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:30.244 09:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=86013 00:26:30.244 09:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 86013 /var/tmp/bperf.sock 00:26:30.244 09:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:30.244 09:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 86013 ']' 00:26:30.244 09:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:30.244 09:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:30.244 09:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:30.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:30.244 09:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:30.244 09:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:30.502 [2024-07-25 09:08:37.400298] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:26:30.502 [2024-07-25 09:08:37.400498] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86013 ] 00:26:30.502 [2024-07-25 09:08:37.574535] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.761 [2024-07-25 09:08:37.804527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:31.326 09:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:31.326 09:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:26:31.326 09:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:31.326 09:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:31.326 09:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:31.584 [2024-07-25 09:08:38.669103] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:26:31.842 09:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:31.842 09:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:32.100 nvme0n1 00:26:32.100 09:08:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:32.100 09:08:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:32.358 Running I/O for 2 seconds... 
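For reference, the bdevperf command line used for this randwrite pass (echoed in the xtrace above) breaks down roughly as follows; the annotations are a reading aid based on standard bdevperf options:

    #   -m 2                    core mask 0x2, i.e. a single reactor on core 1
    #   -r /var/tmp/bperf.sock  RPC listen address that digest.sh drives via rpc.py
    #   -w randwrite -o 4096    workload type and I/O size in bytes
    #   -q 128 -t 2             queue depth and run time in seconds
    #   -z                      do not start I/O until a perform_tests RPC arrives
    #   --wait-for-rpc          defer subsystem init until framework_start_init is called
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc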
00:26:34.301 00:26:34.301 Latency(us) 00:26:34.301 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:34.301 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:34.301 nvme0n1 : 2.01 13612.17 53.17 0.00 0.00 9391.88 3813.00 35746.91 00:26:34.301 =================================================================================================================== 00:26:34.301 Total : 13612.17 53.17 0.00 0.00 9391.88 3813.00 35746.91 00:26:34.301 0 00:26:34.301 09:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:34.301 09:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:34.301 09:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:34.301 09:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:34.301 09:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:34.301 | select(.opcode=="crc32c") 00:26:34.301 | "\(.module_name) \(.executed)"' 00:26:34.559 09:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:34.559 09:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:34.559 09:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:34.559 09:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:34.559 09:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 86013 00:26:34.559 09:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 86013 ']' 00:26:34.559 09:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 86013 00:26:34.559 09:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:26:34.559 09:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:34.559 09:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86013 00:26:34.559 killing process with pid 86013 00:26:34.559 Received shutdown signal, test time was about 2.000000 seconds 00:26:34.559 00:26:34.559 Latency(us) 00:26:34.559 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:34.559 =================================================================================================================== 00:26:34.559 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:34.559 09:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:34.559 09:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:34.559 09:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86013' 00:26:34.559 09:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 86013 00:26:34.559 09:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 
86013 00:26:35.934 09:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:35.934 09:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:35.934 09:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:35.934 09:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:35.934 09:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:35.934 09:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:35.934 09:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:35.934 09:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:35.934 09:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=86086 00:26:35.934 09:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 86086 /var/tmp/bperf.sock 00:26:35.934 09:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 86086 ']' 00:26:35.934 09:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:35.934 09:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:35.934 09:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:35.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:35.934 09:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:35.934 09:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:35.934 [2024-07-25 09:08:42.830204] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:26:35.935 [2024-07-25 09:08:42.830359] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86086 ] 00:26:35.935 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:35.935 Zero copy mechanism will not be used. 
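Each digest_clean pass in this section is one call to the run_bperf helper from host/digest.sh, invoked as run_bperf <rw> <bs> <qd> <scan_dsa>; the xtrace above corresponds to run_bperf randwrite 131072 16 false. A condensed sketch of the launch portion, reconstructed from the trace (the backgrounding, $rootdir and the elided body are assumptions, and waitforlisten is the helper from autotest_common.sh):

    run_bperf() {   # sketch only, not the verbatim digest.sh helper
        local rw=$1 bs=$2 qd=$3 scan_dsa=$4
        local acc_module acc_executed exp_module
        "$rootdir/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
            -w "$rw" -o "$bs" -t 2 -q "$qd" -z --wait-for-rpc &
        bperfpid=$!
        waitforlisten "$bperfpid" /var/tmp/bperf.sock
        # ...RPC setup, perform_tests and the crc32c accel check follow here...
    }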
00:26:35.935 [2024-07-25 09:08:42.996351] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:36.193 [2024-07-25 09:08:43.232325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:36.760 09:08:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:36.760 09:08:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:26:36.760 09:08:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:36.760 09:08:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:36.760 09:08:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:37.326 [2024-07-25 09:08:44.131988] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:26:37.327 09:08:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:37.327 09:08:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:37.585 nvme0n1 00:26:37.585 09:08:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:37.585 09:08:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:37.585 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:37.585 Zero copy mechanism will not be used. 00:26:37.585 Running I/O for 2 seconds... 
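The pass/fail decision for each of these runs (including the one whose results follow) does not come from the I/O numbers but from the accel framework statistics: digest.sh asks bdevperf which accel module actually executed the crc32c operations and how many times. In shell terms the check visible in the trace is:

    # query accel stats over the bperf RPC socket and extract the crc32c entry
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # digest.sh reads the result into "acc_module acc_executed" and requires
    # acc_executed > 0 and, with DSA scanning disabled, a module name of "software"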
00:26:40.115 00:26:40.115 Latency(us) 00:26:40.115 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:40.115 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:40.115 nvme0n1 : 2.00 4682.78 585.35 0.00 0.00 3407.67 2457.60 10604.92 00:26:40.115 =================================================================================================================== 00:26:40.115 Total : 4682.78 585.35 0.00 0.00 3407.67 2457.60 10604.92 00:26:40.115 0 00:26:40.115 09:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:40.115 09:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:40.115 09:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:40.115 09:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:40.115 09:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:40.115 | select(.opcode=="crc32c") 00:26:40.115 | "\(.module_name) \(.executed)"' 00:26:40.115 09:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:40.115 09:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:40.115 09:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:40.115 09:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:40.115 09:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 86086 00:26:40.115 09:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 86086 ']' 00:26:40.115 09:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 86086 00:26:40.115 09:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:26:40.115 09:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:40.115 09:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86086 00:26:40.116 killing process with pid 86086 00:26:40.116 Received shutdown signal, test time was about 2.000000 seconds 00:26:40.116 00:26:40.116 Latency(us) 00:26:40.116 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:40.116 =================================================================================================================== 00:26:40.116 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:40.116 09:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:40.116 09:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:40.116 09:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86086' 00:26:40.116 09:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 86086 00:26:40.116 09:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 
86086 00:26:41.053 09:08:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 85836 00:26:41.053 09:08:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 85836 ']' 00:26:41.053 09:08:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 85836 00:26:41.053 09:08:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:26:41.053 09:08:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:41.053 09:08:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85836 00:26:41.053 killing process with pid 85836 00:26:41.053 09:08:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:41.053 09:08:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:41.053 09:08:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85836' 00:26:41.053 09:08:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 85836 00:26:41.053 09:08:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 85836 00:26:42.430 ************************************ 00:26:42.430 END TEST nvmf_digest_clean 00:26:42.430 ************************************ 00:26:42.430 00:26:42.430 real 0m24.753s 00:26:42.430 user 0m46.825s 00:26:42.430 sys 0m4.966s 00:26:42.430 09:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:42.430 09:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:42.430 09:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:42.430 09:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:42.430 09:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:42.430 09:08:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:42.430 ************************************ 00:26:42.430 START TEST nvmf_digest_error 00:26:42.430 ************************************ 00:26:42.430 09:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:26:42.430 09:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:42.430 09:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:42.430 09:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:42.430 09:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:42.430 09:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=86188 00:26:42.430 09:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 86188 00:26:42.430 09:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
--wait-for-rpc 00:26:42.430 09:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 86188 ']' 00:26:42.430 09:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:42.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:42.430 09:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:42.430 09:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:42.430 09:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:42.430 09:08:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:42.430 [2024-07-25 09:08:49.535830] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:26:42.430 [2024-07-25 09:08:49.536034] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:42.688 [2024-07-25 09:08:49.707848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:42.947 [2024-07-25 09:08:49.945453] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:42.947 [2024-07-25 09:08:49.945532] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:42.947 [2024-07-25 09:08:49.945552] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:42.947 [2024-07-25 09:08:49.945569] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:42.947 [2024-07-25 09:08:49.945583] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
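The nvmf_digest_error test starting here takes the opposite approach to digest_clean: instead of checking that crc32c ran in software, it reroutes crc32c on the target to the accel 'error' module and then injects digest corruption while bdevperf reads. The RPC sequence that appears in the trace below is roughly:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # on the nvmf target (default /var/tmp/spdk.sock): route crc32c through the error module
    "$rpc" accel_assign_opc -o crc32c -m error
    # on the bdevperf side: keep NVMe error statistics and retry failed I/O indefinitely
    "$rpc" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # start with injection disabled, then enable crc32c corruption (flags exactly as traced)
    "$rpc" accel_error_inject_error -o crc32c -t disable
    "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 256

The wall of 'data digest error' and 'COMMAND TRANSIENT TRANSPORT ERROR' notices that fills the rest of this section is the expected effect of that corruption, with bdevperf retrying each failed read.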
00:26:42.947 [2024-07-25 09:08:49.945634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:43.513 09:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:43.513 09:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:26:43.513 09:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:43.513 09:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:43.513 09:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:43.513 09:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:43.513 09:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:43.513 09:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.513 09:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:43.513 [2024-07-25 09:08:50.422517] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:43.513 09:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.513 09:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:26:43.513 09:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:26:43.513 09:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.513 09:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:43.772 [2024-07-25 09:08:50.638585] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:26:43.772 null0 00:26:43.772 [2024-07-25 09:08:50.765448] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:43.772 [2024-07-25 09:08:50.789663] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:43.772 09:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.772 09:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:43.772 09:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:43.772 09:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:43.772 09:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:43.772 09:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:43.772 09:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=86226 00:26:43.772 09:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:43.772 09:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 86226 /var/tmp/bperf.sock 00:26:43.772 09:08:50 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 86226 ']' 00:26:43.772 09:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:43.772 09:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:43.772 09:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:43.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:43.772 09:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:43.772 09:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:44.030 [2024-07-25 09:08:50.904215] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:26:44.030 [2024-07-25 09:08:50.904613] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86226 ] 00:26:44.030 [2024-07-25 09:08:51.077133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:44.289 [2024-07-25 09:08:51.312529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:44.547 [2024-07-25 09:08:51.515037] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:26:44.804 09:08:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:44.804 09:08:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:26:44.804 09:08:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:44.804 09:08:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:45.062 09:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:45.062 09:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.063 09:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:45.063 09:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.063 09:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:45.063 09:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:45.629 nvme0n1 00:26:45.629 09:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:45.629 09:08:52 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.629 09:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:45.629 09:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.629 09:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:45.629 09:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:45.629 Running I/O for 2 seconds... 00:26:45.629 [2024-07-25 09:08:52.659937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:45.629 [2024-07-25 09:08:52.660024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.629 [2024-07-25 09:08:52.660057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.629 [2024-07-25 09:08:52.681250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:45.629 [2024-07-25 09:08:52.681331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.629 [2024-07-25 09:08:52.681354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.629 [2024-07-25 09:08:52.702086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:45.629 [2024-07-25 09:08:52.702159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.629 [2024-07-25 09:08:52.702189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.629 [2024-07-25 09:08:52.722748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:45.629 [2024-07-25 09:08:52.722824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.629 [2024-07-25 09:08:52.722883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.888 [2024-07-25 09:08:52.744683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:45.888 [2024-07-25 09:08:52.744741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.888 [2024-07-25 09:08:52.744768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.888 [2024-07-25 09:08:52.767126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:45.888 [2024-07-25 09:08:52.767191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:11 nsid:1 lba:2449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.888 [2024-07-25 09:08:52.767244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.888 [2024-07-25 09:08:52.787506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:45.888 [2024-07-25 09:08:52.787574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.888 [2024-07-25 09:08:52.787600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.888 [2024-07-25 09:08:52.806756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:45.888 [2024-07-25 09:08:52.806865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.888 [2024-07-25 09:08:52.806889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.888 [2024-07-25 09:08:52.826362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:45.888 [2024-07-25 09:08:52.826432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.888 [2024-07-25 09:08:52.826459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.888 [2024-07-25 09:08:52.846404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:45.888 [2024-07-25 09:08:52.846478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.888 [2024-07-25 09:08:52.846501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.888 [2024-07-25 09:08:52.866051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:45.888 [2024-07-25 09:08:52.866120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:8281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.888 [2024-07-25 09:08:52.866145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.888 [2024-07-25 09:08:52.885617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:45.888 [2024-07-25 09:08:52.885708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.888 [2024-07-25 09:08:52.885729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.888 [2024-07-25 09:08:52.905597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:45.888 [2024-07-25 
09:08:52.905670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:18130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.888 [2024-07-25 09:08:52.905696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.888 [2024-07-25 09:08:52.925708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:45.888 [2024-07-25 09:08:52.925785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.888 [2024-07-25 09:08:52.925806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.888 [2024-07-25 09:08:52.945716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:45.888 [2024-07-25 09:08:52.945793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.889 [2024-07-25 09:08:52.945834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.889 [2024-07-25 09:08:52.965487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:45.889 [2024-07-25 09:08:52.965567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.889 [2024-07-25 09:08:52.965604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.889 [2024-07-25 09:08:52.985734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:45.889 [2024-07-25 09:08:52.985805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.889 [2024-07-25 09:08:52.985896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.148 [2024-07-25 09:08:53.007141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:46.148 [2024-07-25 09:08:53.007215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.148 [2024-07-25 09:08:53.007237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.148 [2024-07-25 09:08:53.027291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:46.148 [2024-07-25 09:08:53.027360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.148 [2024-07-25 09:08:53.027390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.148 [2024-07-25 09:08:53.047385] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:46.148 [2024-07-25 09:08:53.047461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:18653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.148 [2024-07-25 09:08:53.047484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.148 [2024-07-25 09:08:53.067668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:46.148 [2024-07-25 09:08:53.067737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.148 [2024-07-25 09:08:53.067765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.148 [2024-07-25 09:08:53.087781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:46.148 [2024-07-25 09:08:53.087923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.148 [2024-07-25 09:08:53.087949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.148 [2024-07-25 09:08:53.108814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:46.148 [2024-07-25 09:08:53.108898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.148 [2024-07-25 09:08:53.108942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.148 [2024-07-25 09:08:53.130073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:46.148 [2024-07-25 09:08:53.130151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.148 [2024-07-25 09:08:53.130174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.148 [2024-07-25 09:08:53.150626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:46.148 [2024-07-25 09:08:53.150698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.148 [2024-07-25 09:08:53.150725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.148 [2024-07-25 09:08:53.171249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:46.148 [2024-07-25 09:08:53.171331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:9479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.148 [2024-07-25 09:08:53.171354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.148 [2024-07-25 09:08:53.191394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:46.148 [2024-07-25 09:08:53.191467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.148 [2024-07-25 09:08:53.191528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.148 [2024-07-25 09:08:53.211933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:46.148 [2024-07-25 09:08:53.212009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.148 [2024-07-25 09:08:53.212032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.148 [2024-07-25 09:08:53.231718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:46.148 [2024-07-25 09:08:53.231775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:10395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.148 [2024-07-25 09:08:53.231818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.148 [2024-07-25 09:08:53.254059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:46.148 [2024-07-25 09:08:53.254135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:25460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.148 [2024-07-25 09:08:53.254157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.407 [2024-07-25 09:08:53.276261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:46.407 [2024-07-25 09:08:53.276320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:13057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.407 [2024-07-25 09:08:53.276351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.407 [2024-07-25 09:08:53.297960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:46.407 [2024-07-25 09:08:53.298025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.407 [2024-07-25 09:08:53.298047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.407 [2024-07-25 09:08:53.319191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:46.407 [2024-07-25 09:08:53.319260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.407 [2024-07-25 09:08:53.319288] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.407 [2024-07-25 09:08:53.340744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:46.407 [2024-07-25 09:08:53.340810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.407 [2024-07-25 09:08:53.340853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.407 [2024-07-25 09:08:53.362541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:46.407 [2024-07-25 09:08:53.362604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.407 [2024-07-25 09:08:53.362636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.407 [2024-07-25 09:08:53.384518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:46.407 [2024-07-25 09:08:53.384606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:21149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.407 [2024-07-25 09:08:53.384629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.407 [2024-07-25 09:08:53.407111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:46.407 [2024-07-25 09:08:53.407185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.407 [2024-07-25 09:08:53.407214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.407 [2024-07-25 09:08:53.428804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:46.407 [2024-07-25 09:08:53.428886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.407 [2024-07-25 09:08:53.428910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.407 [2024-07-25 09:08:53.450290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:46.407 [2024-07-25 09:08:53.450351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.407 [2024-07-25 09:08:53.450379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.407 [2024-07-25 09:08:53.478046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:46.407 [2024-07-25 09:08:53.478165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:25521 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.407 [2024-07-25 09:08:53.478195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.407 [2024-07-25 09:08:53.499643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:46.407 [2024-07-25 09:08:53.499750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.407 [2024-07-25 09:08:53.499776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.665 [2024-07-25 09:08:53.521726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:46.665 [2024-07-25 09:08:53.521865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:14435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.665 [2024-07-25 09:08:53.521913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.665 [2024-07-25 09:08:53.543858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:46.665 [2024-07-25 09:08:53.544006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.665 [2024-07-25 09:08:53.544035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.665 [2024-07-25 09:08:53.565775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:46.665 [2024-07-25 09:08:53.565884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.665 [2024-07-25 09:08:53.565929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.665 [2024-07-25 09:08:53.588880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:46.665 [2024-07-25 09:08:53.589015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.665 [2024-07-25 09:08:53.589042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.665 [2024-07-25 09:08:53.612838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:46.665 [2024-07-25 09:08:53.613016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:2857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.665 [2024-07-25 09:08:53.613047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.665 [2024-07-25 09:08:53.636568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:46.665 [2024-07-25 09:08:53.636690] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:16610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.665 [2024-07-25 09:08:53.636717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.665 [2024-07-25 09:08:53.659752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:46.665 [2024-07-25 09:08:53.659885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.665 [2024-07-25 09:08:53.659918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.665 [2024-07-25 09:08:53.683958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:46.665 [2024-07-25 09:08:53.684096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.665 [2024-07-25 09:08:53.684123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.665 [2024-07-25 09:08:53.708852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:46.665 [2024-07-25 09:08:53.709003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.665 [2024-07-25 09:08:53.709052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.665 [2024-07-25 09:08:53.736885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:46.665 [2024-07-25 09:08:53.737032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.665 [2024-07-25 09:08:53.737067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.665 [2024-07-25 09:08:53.763810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:46.665 [2024-07-25 09:08:53.763986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.665 [2024-07-25 09:08:53.764043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.924 [2024-07-25 09:08:53.791633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:46.924 [2024-07-25 09:08:53.791851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:9542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.924 [2024-07-25 09:08:53.791923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.924 [2024-07-25 09:08:53.820473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x61500002b280) 00:26:46.924 [2024-07-25 09:08:53.820664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.924 [2024-07-25 09:08:53.820727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.924 [2024-07-25 09:08:53.851471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:46.924 [2024-07-25 09:08:53.851601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.924 [2024-07-25 09:08:53.851635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.924 [2024-07-25 09:08:53.877044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:46.924 [2024-07-25 09:08:53.877155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:4541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.924 [2024-07-25 09:08:53.877193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.924 [2024-07-25 09:08:53.902732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:46.924 [2024-07-25 09:08:53.902865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:8247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.924 [2024-07-25 09:08:53.902908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.924 [2024-07-25 09:08:53.928730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:46.924 [2024-07-25 09:08:53.928873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:23527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.924 [2024-07-25 09:08:53.928906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.924 [2024-07-25 09:08:53.954324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:46.924 [2024-07-25 09:08:53.954441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.924 [2024-07-25 09:08:53.954474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.924 [2024-07-25 09:08:53.980136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:46.924 [2024-07-25 09:08:53.980259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.924 [2024-07-25 09:08:53.980288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.924 [2024-07-25 
09:08:54.005685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:46.924 [2024-07-25 09:08:54.005805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:24306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.924 [2024-07-25 09:08:54.005870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.924 [2024-07-25 09:08:54.031360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:46.924 [2024-07-25 09:08:54.031495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.924 [2024-07-25 09:08:54.031524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.184 [2024-07-25 09:08:54.054292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:47.184 [2024-07-25 09:08:54.054395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.184 [2024-07-25 09:08:54.054423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.184 [2024-07-25 09:08:54.087395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:47.184 [2024-07-25 09:08:54.087547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:19691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.184 [2024-07-25 09:08:54.087591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.184 [2024-07-25 09:08:54.109800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:47.184 [2024-07-25 09:08:54.109922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:17838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.184 [2024-07-25 09:08:54.109946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.184 [2024-07-25 09:08:54.132118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:47.184 [2024-07-25 09:08:54.132223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:19558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.184 [2024-07-25 09:08:54.132267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.184 [2024-07-25 09:08:54.155090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:47.184 [2024-07-25 09:08:54.155212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:8468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.184 [2024-07-25 09:08:54.155252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.184 [2024-07-25 09:08:54.177436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:47.184 [2024-07-25 09:08:54.177539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.184 [2024-07-25 09:08:54.177568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.184 [2024-07-25 09:08:54.199599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:47.184 [2024-07-25 09:08:54.199720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.184 [2024-07-25 09:08:54.199744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.184 [2024-07-25 09:08:54.221898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:47.184 [2024-07-25 09:08:54.222011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.184 [2024-07-25 09:08:54.222038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.184 [2024-07-25 09:08:54.244858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:47.184 [2024-07-25 09:08:54.245018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.184 [2024-07-25 09:08:54.245041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.184 [2024-07-25 09:08:54.268031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:47.184 [2024-07-25 09:08:54.268149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.184 [2024-07-25 09:08:54.268178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.184 [2024-07-25 09:08:54.291135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:47.184 [2024-07-25 09:08:54.291244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.184 [2024-07-25 09:08:54.291268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.443 [2024-07-25 09:08:54.313849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:47.443 [2024-07-25 09:08:54.313960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:47.443 [2024-07-25 09:08:54.313986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.443 [2024-07-25 09:08:54.335997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:47.444 [2024-07-25 09:08:54.336059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:18776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.444 [2024-07-25 09:08:54.336083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.444 [2024-07-25 09:08:54.358822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:47.444 [2024-07-25 09:08:54.358925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.444 [2024-07-25 09:08:54.358974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.444 [2024-07-25 09:08:54.382082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:47.444 [2024-07-25 09:08:54.382192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.444 [2024-07-25 09:08:54.382216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.444 [2024-07-25 09:08:54.404726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:47.444 [2024-07-25 09:08:54.404780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.444 [2024-07-25 09:08:54.404808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.444 [2024-07-25 09:08:54.427542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:47.444 [2024-07-25 09:08:54.427653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.444 [2024-07-25 09:08:54.427678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.444 [2024-07-25 09:08:54.449786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:47.444 [2024-07-25 09:08:54.449850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.444 [2024-07-25 09:08:54.449874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.444 [2024-07-25 09:08:54.470692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:47.444 [2024-07-25 09:08:54.470772] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.444 [2024-07-25 09:08:54.470793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.444 [2024-07-25 09:08:54.492186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:47.444 [2024-07-25 09:08:54.492257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.444 [2024-07-25 09:08:54.492282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.444 [2024-07-25 09:08:54.513899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:47.444 [2024-07-25 09:08:54.513995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:8387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.444 [2024-07-25 09:08:54.514017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.444 [2024-07-25 09:08:54.535010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:47.444 [2024-07-25 09:08:54.535053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.444 [2024-07-25 09:08:54.535075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.444 [2024-07-25 09:08:54.556878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:47.444 [2024-07-25 09:08:54.557000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.444 [2024-07-25 09:08:54.557023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.703 [2024-07-25 09:08:54.579161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:47.703 [2024-07-25 09:08:54.579230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:18590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.703 [2024-07-25 09:08:54.579259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.703 [2024-07-25 09:08:54.601605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:47.703 [2024-07-25 09:08:54.601711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.703 [2024-07-25 09:08:54.601735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.703 [2024-07-25 09:08:54.624235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 
00:26:47.703 [2024-07-25 09:08:54.624284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:2599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.703 [2024-07-25 09:08:54.624306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.703 00:26:47.703 Latency(us) 00:26:47.703 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:47.703 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:47.703 nvme0n1 : 2.01 11252.55 43.96 0.00 0.00 11365.90 9234.62 44087.85 00:26:47.703 =================================================================================================================== 00:26:47.703 Total : 11252.55 43.96 0.00 0.00 11365.90 9234.62 44087.85 00:26:47.703 0 00:26:47.703 09:08:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:47.704 09:08:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:47.704 09:08:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:47.704 09:08:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:47.704 | .driver_specific 00:26:47.704 | .nvme_error 00:26:47.704 | .status_code 00:26:47.704 | .command_transient_transport_error' 00:26:47.962 09:08:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 88 > 0 )) 00:26:47.962 09:08:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 86226 00:26:47.962 09:08:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 86226 ']' 00:26:47.962 09:08:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 86226 00:26:47.962 09:08:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:26:47.962 09:08:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:47.962 09:08:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86226 00:26:47.962 09:08:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:47.962 09:08:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:47.962 killing process with pid 86226 00:26:47.962 09:08:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86226' 00:26:47.962 09:08:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 86226 00:26:47.962 Received shutdown signal, test time was about 2.000000 seconds 00:26:47.962 00:26:47.962 Latency(us) 00:26:47.962 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:47.962 =================================================================================================================== 00:26:47.962 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:47.962 09:08:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 86226 00:26:49.335 09:08:56 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:26:49.335 09:08:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:49.335 09:08:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:49.335 09:08:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:49.335 09:08:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:49.335 09:08:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=86297 00:26:49.335 09:08:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 86297 /var/tmp/bperf.sock 00:26:49.335 09:08:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:26:49.335 09:08:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 86297 ']' 00:26:49.335 09:08:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:49.335 09:08:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:49.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:49.335 09:08:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:49.335 09:08:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:49.335 09:08:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:49.335 [2024-07-25 09:08:56.234686] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:26:49.335 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:49.335 Zero copy mechanism will not be used. 
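The trace above is host/digest.sh moving to its next pass: the finished bdevperf run is scored by reading the NVMe error counters over RPC (the "(( 88 > 0 ))" check), the old bperf process is killed, and run_bperf_err randread 131072 16 starts a fresh bdevperf in wait-for-RPC mode (-z) with a randread, 128 KiB, queue-depth 16 workload. A minimal sketch of those two steps, assuming only the repo path and the /var/tmp/bperf.sock socket that the trace itself shows:

    #!/usr/bin/env bash
    # Sketch only; paths and the bperf socket follow what the trace shows.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bperf_sock=/var/tmp/bperf.sock

    # Score the pass that just finished: each injected digest failure is recorded
    # by the bdev layer as a COMMAND TRANSIENT TRANSPORT ERROR completion
    # (counting requires --nvme-error-stat, visible in the setup trace below).
    errcount=$("$rpc" -s "$bperf_sock" bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errcount > 0 )) || echo "no transient transport errors were counted" >&2

    # Start the next pass: bdevperf idles on its RPC socket (-z) until a bdev is
    # attached and perform_tests is issued.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r "$bperf_sock" -w randread -o 131072 -t 2 -q 16 -z &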
00:26:49.335 [2024-07-25 09:08:56.234879] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86297 ] 00:26:49.335 [2024-07-25 09:08:56.409589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.592 [2024-07-25 09:08:56.676881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:49.850 [2024-07-25 09:08:56.902906] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:26:50.108 09:08:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:50.108 09:08:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:26:50.108 09:08:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:50.108 09:08:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:50.366 09:08:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:50.366 09:08:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.366 09:08:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:50.366 09:08:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.366 09:08:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:50.366 09:08:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:50.624 nvme0n1 00:26:50.883 09:08:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:50.883 09:08:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.883 09:08:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:50.883 09:08:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.883 09:08:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:50.883 09:08:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:50.883 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:50.883 Zero copy mechanism will not be used. 00:26:50.883 Running I/O for 2 seconds... 
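Before perform_tests kicks off the 2-second run that follows, the trace above configures the new bdevperf instance for the error path: bdev_nvme_set_options enables per-status-code NVMe error counters and sets --bdev-retry-count -1 so failed reads are retried by the bdev layer rather than surfaced as failures (the earlier summary shows Fail/s 0.00 alongside 88 counted transient transport errors), the crc32c error injection in the accel layer is cleared and then re-armed in corrupt mode for 32 operations, and the controller is attached with TCP data digest checking (--ddgst). A sketch of that RPC sequence using the values visible in the trace; the socket behind the harness's rpc_cmd wrapper is not shown, so /var/tmp/spdk.sock below is an assumption:

    #!/usr/bin/env bash
    # Sketch only; sockets and addresses mirror the trace where it shows them.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bperf_sock=/var/tmp/bperf.sock
    tgt_sock=/var/tmp/spdk.sock   # assumption: wherever rpc_cmd points in this harness

    # Keep per-status-code NVMe error statistics; retry failed I/O in the bdev
    # layer instead of failing it up to bdevperf.
    "$rpc" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Clear any stale crc32c injection, attach with data digest enabled, then
    # re-arm the injection so 32 digest computations get corrupted.
    "$rpc" -s "$tgt_sock" accel_error_inject_error -o crc32c -t disable
    "$rpc" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    "$rpc" -s "$tgt_sock" accel_error_inject_error -o crc32c -t corrupt -i 32

    # Start the timed run in the idling bdevperf instance.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$bperf_sock" perform_tests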
00:26:50.883 [2024-07-25 09:08:57.869228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:50.883 [2024-07-25 09:08:57.869325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.883 [2024-07-25 09:08:57.869352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:50.883 [2024-07-25 09:08:57.875195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:50.883 [2024-07-25 09:08:57.875256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.883 [2024-07-25 09:08:57.875278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:50.883 [2024-07-25 09:08:57.881189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:50.883 [2024-07-25 09:08:57.881237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.883 [2024-07-25 09:08:57.881263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:50.883 [2024-07-25 09:08:57.887013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:50.883 [2024-07-25 09:08:57.887063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.883 [2024-07-25 09:08:57.887089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:50.883 [2024-07-25 09:08:57.892903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:50.883 [2024-07-25 09:08:57.892976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.883 [2024-07-25 09:08:57.892998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:50.883 [2024-07-25 09:08:57.898691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:50.883 [2024-07-25 09:08:57.898760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.883 [2024-07-25 09:08:57.898782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:50.883 [2024-07-25 09:08:57.904423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:50.883 [2024-07-25 09:08:57.904477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.883 [2024-07-25 09:08:57.904498] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:50.883 [2024-07-25 09:08:57.910043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:50.883 [2024-07-25 09:08:57.910088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.883 [2024-07-25 09:08:57.910113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:50.884 [2024-07-25 09:08:57.915551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:50.884 [2024-07-25 09:08:57.915597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.884 [2024-07-25 09:08:57.915622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:50.884 [2024-07-25 09:08:57.921235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:50.884 [2024-07-25 09:08:57.921291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.884 [2024-07-25 09:08:57.921313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:50.884 [2024-07-25 09:08:57.926922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:50.884 [2024-07-25 09:08:57.926976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.884 [2024-07-25 09:08:57.926998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:50.884 [2024-07-25 09:08:57.932497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:50.884 [2024-07-25 09:08:57.932543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.884 [2024-07-25 09:08:57.932584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:50.884 [2024-07-25 09:08:57.938101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:50.884 [2024-07-25 09:08:57.938163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.884 [2024-07-25 09:08:57.938187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:50.884 [2024-07-25 09:08:57.943627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:50.884 [2024-07-25 09:08:57.943680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:50.884 [2024-07-25 09:08:57.943702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:50.884 [2024-07-25 09:08:57.949181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:50.884 [2024-07-25 09:08:57.949244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.884 [2024-07-25 09:08:57.949266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:50.884 [2024-07-25 09:08:57.954899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:50.884 [2024-07-25 09:08:57.954961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.884 [2024-07-25 09:08:57.954982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:50.884 [2024-07-25 09:08:57.960413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:50.884 [2024-07-25 09:08:57.960458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.884 [2024-07-25 09:08:57.960492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:50.884 [2024-07-25 09:08:57.965963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:50.884 [2024-07-25 09:08:57.966009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.884 [2024-07-25 09:08:57.966033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:50.884 [2024-07-25 09:08:57.971308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:50.884 [2024-07-25 09:08:57.971365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.884 [2024-07-25 09:08:57.971386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:50.884 [2024-07-25 09:08:57.976905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:50.884 [2024-07-25 09:08:57.976958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.884 [2024-07-25 09:08:57.976994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:50.884 [2024-07-25 09:08:57.982442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:50.884 [2024-07-25 09:08:57.982488] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.884 [2024-07-25 09:08:57.982513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:50.884 [2024-07-25 09:08:57.987843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:50.884 [2024-07-25 09:08:57.987913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.884 [2024-07-25 09:08:57.987940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:50.884 [2024-07-25 09:08:57.993603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:50.884 [2024-07-25 09:08:57.993659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.884 [2024-07-25 09:08:57.993680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.143 [2024-07-25 09:08:57.999616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.143 [2024-07-25 09:08:57.999672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.144 [2024-07-25 09:08:57.999693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.144 [2024-07-25 09:08:58.005580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.144 [2024-07-25 09:08:58.005633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.144 [2024-07-25 09:08:58.005654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.144 [2024-07-25 09:08:58.011163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.144 [2024-07-25 09:08:58.011217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.144 [2024-07-25 09:08:58.011240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.144 [2024-07-25 09:08:58.016738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.144 [2024-07-25 09:08:58.016782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.144 [2024-07-25 09:08:58.016807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.144 [2024-07-25 09:08:58.022520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:26:51.144 [2024-07-25 09:08:58.022601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.144 [2024-07-25 09:08:58.022623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.144 [2024-07-25 09:08:58.028308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.144 [2024-07-25 09:08:58.028361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.144 [2024-07-25 09:08:58.028382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.144 [2024-07-25 09:08:58.034118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.144 [2024-07-25 09:08:58.034185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.144 [2024-07-25 09:08:58.034224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.144 [2024-07-25 09:08:58.040005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.144 [2024-07-25 09:08:58.040055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.144 [2024-07-25 09:08:58.040081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.144 [2024-07-25 09:08:58.045927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.144 [2024-07-25 09:08:58.045975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.144 [2024-07-25 09:08:58.046005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.144 [2024-07-25 09:08:58.051607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.144 [2024-07-25 09:08:58.051690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.144 [2024-07-25 09:08:58.051713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.144 [2024-07-25 09:08:58.057827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.144 [2024-07-25 09:08:58.057916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.144 [2024-07-25 09:08:58.057939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.144 [2024-07-25 09:08:58.063815] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.144 [2024-07-25 09:08:58.063889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.144 [2024-07-25 09:08:58.063926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.144 [2024-07-25 09:08:58.069760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.144 [2024-07-25 09:08:58.069828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.144 [2024-07-25 09:08:58.069857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.144 [2024-07-25 09:08:58.075627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.144 [2024-07-25 09:08:58.075677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.144 [2024-07-25 09:08:58.075704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.144 [2024-07-25 09:08:58.081583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.144 [2024-07-25 09:08:58.081638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.144 [2024-07-25 09:08:58.081661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.144 [2024-07-25 09:08:58.087434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.144 [2024-07-25 09:08:58.087491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.144 [2024-07-25 09:08:58.087528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.144 [2024-07-25 09:08:58.093346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.144 [2024-07-25 09:08:58.093395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.144 [2024-07-25 09:08:58.093425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.144 [2024-07-25 09:08:58.099402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.144 [2024-07-25 09:08:58.099451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.144 [2024-07-25 09:08:58.099481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.144 [2024-07-25 09:08:58.105435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.144 [2024-07-25 09:08:58.105497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.144 [2024-07-25 09:08:58.105522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.144 [2024-07-25 09:08:58.111435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.144 [2024-07-25 09:08:58.111491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.144 [2024-07-25 09:08:58.111519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.144 [2024-07-25 09:08:58.117292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.144 [2024-07-25 09:08:58.117348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.144 [2024-07-25 09:08:58.117370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.144 [2024-07-25 09:08:58.123216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.144 [2024-07-25 09:08:58.123264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.144 [2024-07-25 09:08:58.123290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.144 [2024-07-25 09:08:58.129108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.144 [2024-07-25 09:08:58.129156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.144 [2024-07-25 09:08:58.129181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.144 [2024-07-25 09:08:58.134951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.144 [2024-07-25 09:08:58.135010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.144 [2024-07-25 09:08:58.135032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.144 [2024-07-25 09:08:58.140872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.144 [2024-07-25 09:08:58.140926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.144 [2024-07-25 09:08:58.140948] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.144 [2024-07-25 09:08:58.146816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.145 [2024-07-25 09:08:58.146892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.145 [2024-07-25 09:08:58.146921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.145 [2024-07-25 09:08:58.152628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.145 [2024-07-25 09:08:58.152677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.145 [2024-07-25 09:08:58.152707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.145 [2024-07-25 09:08:58.158334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.145 [2024-07-25 09:08:58.158385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.145 [2024-07-25 09:08:58.158410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.145 [2024-07-25 09:08:58.164076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.145 [2024-07-25 09:08:58.164132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.145 [2024-07-25 09:08:58.164154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.145 [2024-07-25 09:08:58.169860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.145 [2024-07-25 09:08:58.169915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.145 [2024-07-25 09:08:58.169936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.145 [2024-07-25 09:08:58.175584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.145 [2024-07-25 09:08:58.175631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.145 [2024-07-25 09:08:58.175656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.145 [2024-07-25 09:08:58.181523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.145 [2024-07-25 09:08:58.181587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.145 [2024-07-25 09:08:58.181613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.145 [2024-07-25 09:08:58.187582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.145 [2024-07-25 09:08:58.187628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.145 [2024-07-25 09:08:58.187653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.145 [2024-07-25 09:08:58.193323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.145 [2024-07-25 09:08:58.193392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.145 [2024-07-25 09:08:58.193415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.145 [2024-07-25 09:08:58.199156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.145 [2024-07-25 09:08:58.199211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.145 [2024-07-25 09:08:58.199233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.145 [2024-07-25 09:08:58.204993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.145 [2024-07-25 09:08:58.205039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.145 [2024-07-25 09:08:58.205067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.145 [2024-07-25 09:08:58.210765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.145 [2024-07-25 09:08:58.210823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.145 [2024-07-25 09:08:58.210851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.145 [2024-07-25 09:08:58.216538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.145 [2024-07-25 09:08:58.216585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.145 [2024-07-25 09:08:58.216613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.145 [2024-07-25 09:08:58.222165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.145 [2024-07-25 
09:08:58.222222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.145 [2024-07-25 09:08:58.222243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.145 [2024-07-25 09:08:58.227638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.145 [2024-07-25 09:08:58.227693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.145 [2024-07-25 09:08:58.227713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.145 [2024-07-25 09:08:58.233329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.145 [2024-07-25 09:08:58.233386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.145 [2024-07-25 09:08:58.233412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.145 [2024-07-25 09:08:58.238814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.145 [2024-07-25 09:08:58.238872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.145 [2024-07-25 09:08:58.238897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.145 [2024-07-25 09:08:58.244412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.145 [2024-07-25 09:08:58.244468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.145 [2024-07-25 09:08:58.244490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.145 [2024-07-25 09:08:58.250059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.145 [2024-07-25 09:08:58.250113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.145 [2024-07-25 09:08:58.250134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.145 [2024-07-25 09:08:58.255666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.145 [2024-07-25 09:08:58.255744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.145 [2024-07-25 09:08:58.255776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.405 [2024-07-25 09:08:58.261511] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.405 [2024-07-25 09:08:58.261574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.405 [2024-07-25 09:08:58.261600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.405 [2024-07-25 09:08:58.267192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.405 [2024-07-25 09:08:58.267238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.405 [2024-07-25 09:08:58.267263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.405 [2024-07-25 09:08:58.272684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.405 [2024-07-25 09:08:58.272743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.405 [2024-07-25 09:08:58.272763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.405 [2024-07-25 09:08:58.278341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.405 [2024-07-25 09:08:58.278412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.405 [2024-07-25 09:08:58.278450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.405 [2024-07-25 09:08:58.283881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.405 [2024-07-25 09:08:58.283944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.405 [2024-07-25 09:08:58.283968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.405 [2024-07-25 09:08:58.289400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.405 [2024-07-25 09:08:58.289461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.405 [2024-07-25 09:08:58.289494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.405 [2024-07-25 09:08:58.294875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.405 [2024-07-25 09:08:58.294928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.405 [2024-07-25 09:08:58.294949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.405 [2024-07-25 09:08:58.300431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.405 [2024-07-25 09:08:58.300489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.405 [2024-07-25 09:08:58.300510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.405 [2024-07-25 09:08:58.306036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.405 [2024-07-25 09:08:58.306080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.405 [2024-07-25 09:08:58.306108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.405 [2024-07-25 09:08:58.311744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.405 [2024-07-25 09:08:58.311791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.405 [2024-07-25 09:08:58.311847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.405 [2024-07-25 09:08:58.317798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.405 [2024-07-25 09:08:58.317892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.405 [2024-07-25 09:08:58.317920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.405 [2024-07-25 09:08:58.323534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.405 [2024-07-25 09:08:58.323605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.405 [2024-07-25 09:08:58.323627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.405 [2024-07-25 09:08:58.329373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.405 [2024-07-25 09:08:58.329428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.405 [2024-07-25 09:08:58.329450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.405 [2024-07-25 09:08:58.335360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.405 [2024-07-25 09:08:58.335407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.405 [2024-07-25 09:08:58.335435] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.405 [2024-07-25 09:08:58.341277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.405 [2024-07-25 09:08:58.341323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.405 [2024-07-25 09:08:58.341348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.405 [2024-07-25 09:08:58.347014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.405 [2024-07-25 09:08:58.347058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.405 [2024-07-25 09:08:58.347081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.405 [2024-07-25 09:08:58.352546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.405 [2024-07-25 09:08:58.352602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.405 [2024-07-25 09:08:58.352623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.405 [2024-07-25 09:08:58.358109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.405 [2024-07-25 09:08:58.358163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.405 [2024-07-25 09:08:58.358183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.405 [2024-07-25 09:08:58.363532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.405 [2024-07-25 09:08:58.363610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.405 [2024-07-25 09:08:58.363634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.405 [2024-07-25 09:08:58.368934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.405 [2024-07-25 09:08:58.368979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.405 [2024-07-25 09:08:58.369003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.405 [2024-07-25 09:08:58.374310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.405 [2024-07-25 09:08:58.374361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.405 [2024-07-25 09:08:58.374381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.405 [2024-07-25 09:08:58.379746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.405 [2024-07-25 09:08:58.379826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.405 [2024-07-25 09:08:58.379861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.405 [2024-07-25 09:08:58.385174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.405 [2024-07-25 09:08:58.385231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.405 [2024-07-25 09:08:58.385262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.405 [2024-07-25 09:08:58.390435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.405 [2024-07-25 09:08:58.390480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.405 [2024-07-25 09:08:58.390504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.405 [2024-07-25 09:08:58.395674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.405 [2024-07-25 09:08:58.395735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.406 [2024-07-25 09:08:58.395756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.406 [2024-07-25 09:08:58.401242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.406 [2024-07-25 09:08:58.401314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.406 [2024-07-25 09:08:58.401352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.406 [2024-07-25 09:08:58.407061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.406 [2024-07-25 09:08:58.407105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.406 [2024-07-25 09:08:58.407133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.406 [2024-07-25 09:08:58.412569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.406 [2024-07-25 
09:08:58.412614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.406 [2024-07-25 09:08:58.412657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.406 [2024-07-25 09:08:58.417973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.406 [2024-07-25 09:08:58.418018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.406 [2024-07-25 09:08:58.418041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.406 [2024-07-25 09:08:58.423215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.406 [2024-07-25 09:08:58.423267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.406 [2024-07-25 09:08:58.423288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.406 [2024-07-25 09:08:58.428768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.406 [2024-07-25 09:08:58.428838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.406 [2024-07-25 09:08:58.428861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.406 [2024-07-25 09:08:58.434371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.406 [2024-07-25 09:08:58.434417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.406 [2024-07-25 09:08:58.434458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.406 [2024-07-25 09:08:58.440284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.406 [2024-07-25 09:08:58.440333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.406 [2024-07-25 09:08:58.440364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.406 [2024-07-25 09:08:58.445936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.406 [2024-07-25 09:08:58.445988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.406 [2024-07-25 09:08:58.446009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.406 [2024-07-25 09:08:58.451841] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.406 [2024-07-25 09:08:58.451946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.406 [2024-07-25 09:08:58.451969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.406 [2024-07-25 09:08:58.457519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.406 [2024-07-25 09:08:58.457565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.406 [2024-07-25 09:08:58.457593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.406 [2024-07-25 09:08:58.462937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.406 [2024-07-25 09:08:58.462981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.406 [2024-07-25 09:08:58.463004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.406 [2024-07-25 09:08:58.468275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.406 [2024-07-25 09:08:58.468319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.406 [2024-07-25 09:08:58.468350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.406 [2024-07-25 09:08:58.473482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.406 [2024-07-25 09:08:58.473538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.406 [2024-07-25 09:08:58.473559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.406 [2024-07-25 09:08:58.478903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.406 [2024-07-25 09:08:58.478954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.406 [2024-07-25 09:08:58.478976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.406 [2024-07-25 09:08:58.484594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.406 [2024-07-25 09:08:58.484642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.406 [2024-07-25 09:08:58.484666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.406 [2024-07-25 09:08:58.490438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.406 [2024-07-25 09:08:58.490484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.406 [2024-07-25 09:08:58.490508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.406 [2024-07-25 09:08:58.496097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.406 [2024-07-25 09:08:58.496152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.406 [2024-07-25 09:08:58.496175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.406 [2024-07-25 09:08:58.501874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.406 [2024-07-25 09:08:58.501927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.406 [2024-07-25 09:08:58.501964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.406 [2024-07-25 09:08:58.507672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.406 [2024-07-25 09:08:58.507720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.406 [2024-07-25 09:08:58.507748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.406 [2024-07-25 09:08:58.513536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.406 [2024-07-25 09:08:58.513585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.406 [2024-07-25 09:08:58.513610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.666 [2024-07-25 09:08:58.519334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.666 [2024-07-25 09:08:58.519382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.666 [2024-07-25 09:08:58.519419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.666 [2024-07-25 09:08:58.525385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.666 [2024-07-25 09:08:58.525456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.666 [2024-07-25 09:08:58.525478] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.666 [2024-07-25 09:08:58.531487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.666 [2024-07-25 09:08:58.531562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.666 [2024-07-25 09:08:58.531602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.666 [2024-07-25 09:08:58.537545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.666 [2024-07-25 09:08:58.537597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.666 [2024-07-25 09:08:58.537642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.666 [2024-07-25 09:08:58.543489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.666 [2024-07-25 09:08:58.543565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.666 [2024-07-25 09:08:58.543606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.666 [2024-07-25 09:08:58.549386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.666 [2024-07-25 09:08:58.549436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.666 [2024-07-25 09:08:58.549460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.666 [2024-07-25 09:08:58.554895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.666 [2024-07-25 09:08:58.554949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.666 [2024-07-25 09:08:58.554971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.666 [2024-07-25 09:08:58.560558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.666 [2024-07-25 09:08:58.560632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.666 [2024-07-25 09:08:58.560653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.666 [2024-07-25 09:08:58.566325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.667 [2024-07-25 09:08:58.566371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.667 [2024-07-25 09:08:58.566391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.667 [2024-07-25 09:08:58.571977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.667 [2024-07-25 09:08:58.572024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.667 [2024-07-25 09:08:58.572045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.667 [2024-07-25 09:08:58.577543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.667 [2024-07-25 09:08:58.577591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.667 [2024-07-25 09:08:58.577612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.667 [2024-07-25 09:08:58.583428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.667 [2024-07-25 09:08:58.583477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.667 [2024-07-25 09:08:58.583498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.667 [2024-07-25 09:08:58.589299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.667 [2024-07-25 09:08:58.589348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.667 [2024-07-25 09:08:58.589369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.667 [2024-07-25 09:08:58.594950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.667 [2024-07-25 09:08:58.595013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.667 [2024-07-25 09:08:58.595034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.667 [2024-07-25 09:08:58.600714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.667 [2024-07-25 09:08:58.600763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.667 [2024-07-25 09:08:58.600783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.667 [2024-07-25 09:08:58.606447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.667 [2024-07-25 
09:08:58.606493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.667 [2024-07-25 09:08:58.606513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.667 [2024-07-25 09:08:58.612356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.667 [2024-07-25 09:08:58.612401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.667 [2024-07-25 09:08:58.612421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.667 [2024-07-25 09:08:58.618159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.667 [2024-07-25 09:08:58.618233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.667 [2024-07-25 09:08:58.618253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.667 [2024-07-25 09:08:58.623662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.667 [2024-07-25 09:08:58.623710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.667 [2024-07-25 09:08:58.623730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.667 [2024-07-25 09:08:58.629310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.667 [2024-07-25 09:08:58.629368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.667 [2024-07-25 09:08:58.629389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.667 [2024-07-25 09:08:58.634889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.667 [2024-07-25 09:08:58.634935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.667 [2024-07-25 09:08:58.634955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.667 [2024-07-25 09:08:58.640482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.667 [2024-07-25 09:08:58.640529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.667 [2024-07-25 09:08:58.640548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.667 [2024-07-25 09:08:58.645981] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.667 [2024-07-25 09:08:58.646026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.667 [2024-07-25 09:08:58.646045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.667 [2024-07-25 09:08:58.651536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.667 [2024-07-25 09:08:58.651617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.667 [2024-07-25 09:08:58.651640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.667 [2024-07-25 09:08:58.657153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.667 [2024-07-25 09:08:58.657198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.667 [2024-07-25 09:08:58.657218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.667 [2024-07-25 09:08:58.662868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.667 [2024-07-25 09:08:58.662935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.667 [2024-07-25 09:08:58.662957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.667 [2024-07-25 09:08:58.668704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.667 [2024-07-25 09:08:58.668753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.667 [2024-07-25 09:08:58.668774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.667 [2024-07-25 09:08:58.674443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.667 [2024-07-25 09:08:58.674509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.667 [2024-07-25 09:08:58.674530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.667 [2024-07-25 09:08:58.680294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.667 [2024-07-25 09:08:58.680360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.667 [2024-07-25 09:08:58.680382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.667 [2024-07-25 09:08:58.686090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.667 [2024-07-25 09:08:58.686153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.667 [2024-07-25 09:08:58.686189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.667 [2024-07-25 09:08:58.692064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.667 [2024-07-25 09:08:58.692113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.667 [2024-07-25 09:08:58.692135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.667 [2024-07-25 09:08:58.697824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.667 [2024-07-25 09:08:58.697885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.668 [2024-07-25 09:08:58.697906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.668 [2024-07-25 09:08:58.703524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.668 [2024-07-25 09:08:58.703590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.668 [2024-07-25 09:08:58.703611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.668 [2024-07-25 09:08:58.709168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.668 [2024-07-25 09:08:58.709215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.668 [2024-07-25 09:08:58.709235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.668 [2024-07-25 09:08:58.714636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.668 [2024-07-25 09:08:58.714683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.668 [2024-07-25 09:08:58.714702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.668 [2024-07-25 09:08:58.720360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.668 [2024-07-25 09:08:58.720407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.668 [2024-07-25 09:08:58.720427] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.668 [2024-07-25 09:08:58.726004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.668 [2024-07-25 09:08:58.726049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.668 [2024-07-25 09:08:58.726068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.668 [2024-07-25 09:08:58.731846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.668 [2024-07-25 09:08:58.731936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.668 [2024-07-25 09:08:58.731958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.668 [2024-07-25 09:08:58.737583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.668 [2024-07-25 09:08:58.737633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.668 [2024-07-25 09:08:58.737654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.668 [2024-07-25 09:08:58.743399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.668 [2024-07-25 09:08:58.743450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.668 [2024-07-25 09:08:58.743472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.668 [2024-07-25 09:08:58.749269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.668 [2024-07-25 09:08:58.749316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.668 [2024-07-25 09:08:58.749337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.668 [2024-07-25 09:08:58.755098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.668 [2024-07-25 09:08:58.755145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.668 [2024-07-25 09:08:58.755165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.668 [2024-07-25 09:08:58.761007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.668 [2024-07-25 09:08:58.761053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.668 [2024-07-25 09:08:58.761089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.668 [2024-07-25 09:08:58.766614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.668 [2024-07-25 09:08:58.766661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.668 [2024-07-25 09:08:58.766682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.668 [2024-07-25 09:08:58.772397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.668 [2024-07-25 09:08:58.772445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.668 [2024-07-25 09:08:58.772465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.668 [2024-07-25 09:08:58.778250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.668 [2024-07-25 09:08:58.778297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.668 [2024-07-25 09:08:58.778318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.927 [2024-07-25 09:08:58.783838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.927 [2024-07-25 09:08:58.783935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.927 [2024-07-25 09:08:58.783957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.927 [2024-07-25 09:08:58.789492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.927 [2024-07-25 09:08:58.789538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.927 [2024-07-25 09:08:58.789558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.927 [2024-07-25 09:08:58.794942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.927 [2024-07-25 09:08:58.794990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.928 [2024-07-25 09:08:58.795009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.928 [2024-07-25 09:08:58.800520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.928 [2024-07-25 
09:08:58.800566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.928 [2024-07-25 09:08:58.800585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.928 [2024-07-25 09:08:58.806113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.928 [2024-07-25 09:08:58.806162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.928 [2024-07-25 09:08:58.806183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.928 [2024-07-25 09:08:58.811572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.928 [2024-07-25 09:08:58.811618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.928 [2024-07-25 09:08:58.811638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.928 [2024-07-25 09:08:58.817121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.928 [2024-07-25 09:08:58.817166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.928 [2024-07-25 09:08:58.817186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.928 [2024-07-25 09:08:58.822559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.928 [2024-07-25 09:08:58.822606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.928 [2024-07-25 09:08:58.822626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.928 [2024-07-25 09:08:58.828075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.928 [2024-07-25 09:08:58.828123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.928 [2024-07-25 09:08:58.828144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.928 [2024-07-25 09:08:58.833654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.928 [2024-07-25 09:08:58.833702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.928 [2024-07-25 09:08:58.833722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.928 [2024-07-25 09:08:58.839206] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.928 [2024-07-25 09:08:58.839251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.928 [2024-07-25 09:08:58.839270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.928 [2024-07-25 09:08:58.844803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.928 [2024-07-25 09:08:58.844865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.928 [2024-07-25 09:08:58.844886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.928 [2024-07-25 09:08:58.850264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.928 [2024-07-25 09:08:58.850310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.928 [2024-07-25 09:08:58.850330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.928 [2024-07-25 09:08:58.855669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.928 [2024-07-25 09:08:58.855715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.928 [2024-07-25 09:08:58.855735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.928 [2024-07-25 09:08:58.861183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.928 [2024-07-25 09:08:58.861231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.928 [2024-07-25 09:08:58.861251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.928 [2024-07-25 09:08:58.866616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.928 [2024-07-25 09:08:58.866661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.928 [2024-07-25 09:08:58.866681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.928 [2024-07-25 09:08:58.872179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.928 [2024-07-25 09:08:58.872260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.928 [2024-07-25 09:08:58.872280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.928 [2024-07-25 09:08:58.877761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.928 [2024-07-25 09:08:58.877809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.928 [2024-07-25 09:08:58.877846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.928 [2024-07-25 09:08:58.883374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.928 [2024-07-25 09:08:58.883419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.928 [2024-07-25 09:08:58.883437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.928 [2024-07-25 09:08:58.889036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.928 [2024-07-25 09:08:58.889081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.928 [2024-07-25 09:08:58.889100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.928 [2024-07-25 09:08:58.894517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.928 [2024-07-25 09:08:58.894565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.928 [2024-07-25 09:08:58.894586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.928 [2024-07-25 09:08:58.900240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.928 [2024-07-25 09:08:58.900289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.928 [2024-07-25 09:08:58.900310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.928 [2024-07-25 09:08:58.906026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.928 [2024-07-25 09:08:58.906073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.928 [2024-07-25 09:08:58.906093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.928 [2024-07-25 09:08:58.911755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.928 [2024-07-25 09:08:58.911800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.928 [2024-07-25 
09:08:58.911833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.928 [2024-07-25 09:08:58.917634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.928 [2024-07-25 09:08:58.917683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.928 [2024-07-25 09:08:58.917704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.928 [2024-07-25 09:08:58.923417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.928 [2024-07-25 09:08:58.923478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.928 [2024-07-25 09:08:58.923499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.929 [2024-07-25 09:08:58.929344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.929 [2024-07-25 09:08:58.929392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.929 [2024-07-25 09:08:58.929413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.929 [2024-07-25 09:08:58.935371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.929 [2024-07-25 09:08:58.935426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.929 [2024-07-25 09:08:58.935447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.929 [2024-07-25 09:08:58.941358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.929 [2024-07-25 09:08:58.941407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.929 [2024-07-25 09:08:58.941428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.929 [2024-07-25 09:08:58.947271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.929 [2024-07-25 09:08:58.947318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.929 [2024-07-25 09:08:58.947339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.929 [2024-07-25 09:08:58.953166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.929 [2024-07-25 09:08:58.953214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.929 [2024-07-25 09:08:58.953237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.929 [2024-07-25 09:08:58.958762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.929 [2024-07-25 09:08:58.958806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.929 [2024-07-25 09:08:58.958845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.929 [2024-07-25 09:08:58.964199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.929 [2024-07-25 09:08:58.964269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.929 [2024-07-25 09:08:58.964289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.929 [2024-07-25 09:08:58.969641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.929 [2024-07-25 09:08:58.969687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.929 [2024-07-25 09:08:58.969707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.929 [2024-07-25 09:08:58.974947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.929 [2024-07-25 09:08:58.974992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.929 [2024-07-25 09:08:58.975011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.929 [2024-07-25 09:08:58.980301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.929 [2024-07-25 09:08:58.980346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.929 [2024-07-25 09:08:58.980367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.929 [2024-07-25 09:08:58.985696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.929 [2024-07-25 09:08:58.985742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.929 [2024-07-25 09:08:58.985761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.929 [2024-07-25 09:08:58.990923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.929 
[2024-07-25 09:08:58.990967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.929 [2024-07-25 09:08:58.990986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.929 [2024-07-25 09:08:58.996454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.929 [2024-07-25 09:08:58.996501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.929 [2024-07-25 09:08:58.996520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.929 [2024-07-25 09:08:59.002277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.929 [2024-07-25 09:08:59.002330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.929 [2024-07-25 09:08:59.002366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.929 [2024-07-25 09:08:59.008293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.929 [2024-07-25 09:08:59.008345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.929 [2024-07-25 09:08:59.008365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.929 [2024-07-25 09:08:59.013658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.929 [2024-07-25 09:08:59.013704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.929 [2024-07-25 09:08:59.013724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.929 [2024-07-25 09:08:59.019159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.929 [2024-07-25 09:08:59.019211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.929 [2024-07-25 09:08:59.019231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.929 [2024-07-25 09:08:59.024749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.929 [2024-07-25 09:08:59.024796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.929 [2024-07-25 09:08:59.024828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.929 [2024-07-25 09:08:59.030031] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.929 [2024-07-25 09:08:59.030076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.929 [2024-07-25 09:08:59.030096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.929 [2024-07-25 09:08:59.035323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:51.929 [2024-07-25 09:08:59.035367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.929 [2024-07-25 09:08:59.035388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.189 [2024-07-25 09:08:59.041144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.189 [2024-07-25 09:08:59.041191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.189 [2024-07-25 09:08:59.041227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.189 [2024-07-25 09:08:59.046724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.189 [2024-07-25 09:08:59.046787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.189 [2024-07-25 09:08:59.046807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.189 [2024-07-25 09:08:59.051970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.189 [2024-07-25 09:08:59.052017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.189 [2024-07-25 09:08:59.052038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.189 [2024-07-25 09:08:59.057206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.189 [2024-07-25 09:08:59.057256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.189 [2024-07-25 09:08:59.057275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.189 [2024-07-25 09:08:59.062305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.189 [2024-07-25 09:08:59.062359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.189 [2024-07-25 09:08:59.062378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.189 [2024-07-25 09:08:59.067507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.189 [2024-07-25 09:08:59.067552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.189 [2024-07-25 09:08:59.067587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.189 [2024-07-25 09:08:59.072856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.189 [2024-07-25 09:08:59.072919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.189 [2024-07-25 09:08:59.072940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.189 [2024-07-25 09:08:59.078172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.189 [2024-07-25 09:08:59.078222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.189 [2024-07-25 09:08:59.078242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.189 [2024-07-25 09:08:59.083509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.189 [2024-07-25 09:08:59.083555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.189 [2024-07-25 09:08:59.083591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.190 [2024-07-25 09:08:59.088896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.190 [2024-07-25 09:08:59.088940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.190 [2024-07-25 09:08:59.088959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.190 [2024-07-25 09:08:59.094601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.190 [2024-07-25 09:08:59.094656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.190 [2024-07-25 09:08:59.094684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.190 [2024-07-25 09:08:59.100323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.190 [2024-07-25 09:08:59.100380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.190 [2024-07-25 09:08:59.100416] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.190 [2024-07-25 09:08:59.106285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.190 [2024-07-25 09:08:59.106354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.190 [2024-07-25 09:08:59.106383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.190 [2024-07-25 09:08:59.112429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.190 [2024-07-25 09:08:59.112500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.190 [2024-07-25 09:08:59.112522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.190 [2024-07-25 09:08:59.118638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.190 [2024-07-25 09:08:59.118706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.190 [2024-07-25 09:08:59.118726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.190 [2024-07-25 09:08:59.124663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.190 [2024-07-25 09:08:59.124719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.190 [2024-07-25 09:08:59.124739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.190 [2024-07-25 09:08:59.130789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.190 [2024-07-25 09:08:59.130859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.190 [2024-07-25 09:08:59.130879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.190 [2024-07-25 09:08:59.136372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.190 [2024-07-25 09:08:59.136428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.190 [2024-07-25 09:08:59.136448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.190 [2024-07-25 09:08:59.141837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.190 [2024-07-25 09:08:59.141895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.190 [2024-07-25 09:08:59.141915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.190 [2024-07-25 09:08:59.147247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.190 [2024-07-25 09:08:59.147300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.190 [2024-07-25 09:08:59.147321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.190 [2024-07-25 09:08:59.153098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.190 [2024-07-25 09:08:59.153144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.190 [2024-07-25 09:08:59.153163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.190 [2024-07-25 09:08:59.158650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.190 [2024-07-25 09:08:59.158695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.190 [2024-07-25 09:08:59.158715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.190 [2024-07-25 09:08:59.164165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.190 [2024-07-25 09:08:59.164228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.190 [2024-07-25 09:08:59.164249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.190 [2024-07-25 09:08:59.169974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.190 [2024-07-25 09:08:59.170019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.190 [2024-07-25 09:08:59.170039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.190 [2024-07-25 09:08:59.175719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.190 [2024-07-25 09:08:59.175767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.190 [2024-07-25 09:08:59.175788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.190 [2024-07-25 09:08:59.181312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.190 [2024-07-25 
09:08:59.181361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.190 [2024-07-25 09:08:59.181381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.190 [2024-07-25 09:08:59.186889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.190 [2024-07-25 09:08:59.186936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.190 [2024-07-25 09:08:59.186957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.190 [2024-07-25 09:08:59.192562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.190 [2024-07-25 09:08:59.192609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.190 [2024-07-25 09:08:59.192629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.190 [2024-07-25 09:08:59.198313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.190 [2024-07-25 09:08:59.198361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.190 [2024-07-25 09:08:59.198381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.190 [2024-07-25 09:08:59.204033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.190 [2024-07-25 09:08:59.204081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.190 [2024-07-25 09:08:59.204101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.190 [2024-07-25 09:08:59.209799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.190 [2024-07-25 09:08:59.209854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.190 [2024-07-25 09:08:59.209874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.190 [2024-07-25 09:08:59.215699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.190 [2024-07-25 09:08:59.215746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.190 [2024-07-25 09:08:59.215766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.190 [2024-07-25 09:08:59.221547] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.190 [2024-07-25 09:08:59.221595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.191 [2024-07-25 09:08:59.221615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.191 [2024-07-25 09:08:59.227023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.191 [2024-07-25 09:08:59.227069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.191 [2024-07-25 09:08:59.227089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.191 [2024-07-25 09:08:59.232558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.191 [2024-07-25 09:08:59.232603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.191 [2024-07-25 09:08:59.232623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.191 [2024-07-25 09:08:59.238051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.191 [2024-07-25 09:08:59.238095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.191 [2024-07-25 09:08:59.238115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.191 [2024-07-25 09:08:59.243451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.191 [2024-07-25 09:08:59.243498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.191 [2024-07-25 09:08:59.243518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.191 [2024-07-25 09:08:59.248978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.191 [2024-07-25 09:08:59.249023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.191 [2024-07-25 09:08:59.249042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.191 [2024-07-25 09:08:59.254375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.191 [2024-07-25 09:08:59.254423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.191 [2024-07-25 09:08:59.254443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.191 [2024-07-25 09:08:59.260150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.191 [2024-07-25 09:08:59.260197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.191 [2024-07-25 09:08:59.260282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.191 [2024-07-25 09:08:59.265899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.191 [2024-07-25 09:08:59.265947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.191 [2024-07-25 09:08:59.265968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.191 [2024-07-25 09:08:59.271629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.191 [2024-07-25 09:08:59.271676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.191 [2024-07-25 09:08:59.271695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.191 [2024-07-25 09:08:59.277238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.191 [2024-07-25 09:08:59.277284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.191 [2024-07-25 09:08:59.277304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.191 [2024-07-25 09:08:59.282744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.191 [2024-07-25 09:08:59.282791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.191 [2024-07-25 09:08:59.282824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.191 [2024-07-25 09:08:59.288349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.191 [2024-07-25 09:08:59.288394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.191 [2024-07-25 09:08:59.288415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.191 [2024-07-25 09:08:59.293852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.191 [2024-07-25 09:08:59.293895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.191 [2024-07-25 09:08:59.293915] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.191 [2024-07-25 09:08:59.299299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.191 [2024-07-25 09:08:59.299345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.191 [2024-07-25 09:08:59.299364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.452 [2024-07-25 09:08:59.304935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.452 [2024-07-25 09:08:59.304981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.452 [2024-07-25 09:08:59.305000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.452 [2024-07-25 09:08:59.310633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.452 [2024-07-25 09:08:59.310682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.452 [2024-07-25 09:08:59.310704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.452 [2024-07-25 09:08:59.316051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.452 [2024-07-25 09:08:59.316098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.452 [2024-07-25 09:08:59.316118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.452 [2024-07-25 09:08:59.321524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.452 [2024-07-25 09:08:59.321572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.452 [2024-07-25 09:08:59.321593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.452 [2024-07-25 09:08:59.327176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.452 [2024-07-25 09:08:59.327222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.452 [2024-07-25 09:08:59.327242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.452 [2024-07-25 09:08:59.333014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.452 [2024-07-25 09:08:59.333062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.452 [2024-07-25 09:08:59.333082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.452 [2024-07-25 09:08:59.338681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.452 [2024-07-25 09:08:59.338729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.452 [2024-07-25 09:08:59.338750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.452 [2024-07-25 09:08:59.344373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.452 [2024-07-25 09:08:59.344431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.452 [2024-07-25 09:08:59.344452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.452 [2024-07-25 09:08:59.350101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.452 [2024-07-25 09:08:59.350150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.452 [2024-07-25 09:08:59.350171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.452 [2024-07-25 09:08:59.356000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.452 [2024-07-25 09:08:59.356048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.452 [2024-07-25 09:08:59.356069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.452 [2024-07-25 09:08:59.361775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.452 [2024-07-25 09:08:59.361835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.452 [2024-07-25 09:08:59.361857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.452 [2024-07-25 09:08:59.367494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.452 [2024-07-25 09:08:59.367541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.452 [2024-07-25 09:08:59.367561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.452 [2024-07-25 09:08:59.373360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.452 [2024-07-25 09:08:59.373407] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.452 [2024-07-25 09:08:59.373428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.452 [2024-07-25 09:08:59.379077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.452 [2024-07-25 09:08:59.379123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.453 [2024-07-25 09:08:59.379143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.453 [2024-07-25 09:08:59.384880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.453 [2024-07-25 09:08:59.384928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.453 [2024-07-25 09:08:59.384948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.453 [2024-07-25 09:08:59.390485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.453 [2024-07-25 09:08:59.390531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.453 [2024-07-25 09:08:59.390552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.453 [2024-07-25 09:08:59.396039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.453 [2024-07-25 09:08:59.396086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.453 [2024-07-25 09:08:59.396107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.453 [2024-07-25 09:08:59.401838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.453 [2024-07-25 09:08:59.401894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.453 [2024-07-25 09:08:59.401915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.453 [2024-07-25 09:08:59.407643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.453 [2024-07-25 09:08:59.407690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.453 [2024-07-25 09:08:59.407710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.453 [2024-07-25 09:08:59.413369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.453 [2024-07-25 09:08:59.413424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.453 [2024-07-25 09:08:59.413444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.453 [2024-07-25 09:08:59.418990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.453 [2024-07-25 09:08:59.419035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.453 [2024-07-25 09:08:59.419055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.453 [2024-07-25 09:08:59.424604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.453 [2024-07-25 09:08:59.424653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.453 [2024-07-25 09:08:59.424674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.453 [2024-07-25 09:08:59.430356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.453 [2024-07-25 09:08:59.430403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.453 [2024-07-25 09:08:59.430424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.453 [2024-07-25 09:08:59.435990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.453 [2024-07-25 09:08:59.436036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.453 [2024-07-25 09:08:59.436057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.453 [2024-07-25 09:08:59.441641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.453 [2024-07-25 09:08:59.441687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.453 [2024-07-25 09:08:59.441707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.453 [2024-07-25 09:08:59.447234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.453 [2024-07-25 09:08:59.447282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.453 [2024-07-25 09:08:59.447303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.453 
[2024-07-25 09:08:59.452773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.453 [2024-07-25 09:08:59.452832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.453 [2024-07-25 09:08:59.452854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.453 [2024-07-25 09:08:59.458410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.453 [2024-07-25 09:08:59.458457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.453 [2024-07-25 09:08:59.458478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.453 [2024-07-25 09:08:59.464116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.453 [2024-07-25 09:08:59.464163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.453 [2024-07-25 09:08:59.464183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.453 [2024-07-25 09:08:59.469746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.453 [2024-07-25 09:08:59.469791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.453 [2024-07-25 09:08:59.469823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.453 [2024-07-25 09:08:59.475134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.453 [2024-07-25 09:08:59.475179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.453 [2024-07-25 09:08:59.475198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.453 [2024-07-25 09:08:59.480610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.453 [2024-07-25 09:08:59.480655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.453 [2024-07-25 09:08:59.480675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.453 [2024-07-25 09:08:59.486209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.453 [2024-07-25 09:08:59.486255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.453 [2024-07-25 09:08:59.486275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.453 [2024-07-25 09:08:59.491913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.453 [2024-07-25 09:08:59.491960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.453 [2024-07-25 09:08:59.491980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.453 [2024-07-25 09:08:59.497410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.453 [2024-07-25 09:08:59.497463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.453 [2024-07-25 09:08:59.497483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.453 [2024-07-25 09:08:59.502944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.453 [2024-07-25 09:08:59.503025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.453 [2024-07-25 09:08:59.503043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.453 [2024-07-25 09:08:59.508513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.453 [2024-07-25 09:08:59.508559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.453 [2024-07-25 09:08:59.508578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.453 [2024-07-25 09:08:59.513932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.453 [2024-07-25 09:08:59.513979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.453 [2024-07-25 09:08:59.513999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.454 [2024-07-25 09:08:59.519681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.454 [2024-07-25 09:08:59.519727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.454 [2024-07-25 09:08:59.519747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.454 [2024-07-25 09:08:59.525422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.454 [2024-07-25 09:08:59.525498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.454 
[2024-07-25 09:08:59.525520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.454 [2024-07-25 09:08:59.531071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.454 [2024-07-25 09:08:59.531117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.454 [2024-07-25 09:08:59.531137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.454 [2024-07-25 09:08:59.536893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.454 [2024-07-25 09:08:59.536953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.454 [2024-07-25 09:08:59.536974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.454 [2024-07-25 09:08:59.542634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.454 [2024-07-25 09:08:59.542681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.454 [2024-07-25 09:08:59.542702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.454 [2024-07-25 09:08:59.548443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.454 [2024-07-25 09:08:59.548505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.454 [2024-07-25 09:08:59.548526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.454 [2024-07-25 09:08:59.554231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.454 [2024-07-25 09:08:59.554279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.454 [2024-07-25 09:08:59.554306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.454 [2024-07-25 09:08:59.560067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.454 [2024-07-25 09:08:59.560114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.454 [2024-07-25 09:08:59.560135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.713 [2024-07-25 09:08:59.565755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.713 [2024-07-25 09:08:59.565803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.713 [2024-07-25 09:08:59.565869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.713 [2024-07-25 09:08:59.571615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.713 [2024-07-25 09:08:59.571660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.713 [2024-07-25 09:08:59.571679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.713 [2024-07-25 09:08:59.577155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.713 [2024-07-25 09:08:59.577200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.714 [2024-07-25 09:08:59.577219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.714 [2024-07-25 09:08:59.582657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.714 [2024-07-25 09:08:59.582706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.714 [2024-07-25 09:08:59.582726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.714 [2024-07-25 09:08:59.588194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.714 [2024-07-25 09:08:59.588262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.714 [2024-07-25 09:08:59.588282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.714 [2024-07-25 09:08:59.593585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.714 [2024-07-25 09:08:59.593631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.714 [2024-07-25 09:08:59.593650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.714 [2024-07-25 09:08:59.599054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.714 [2024-07-25 09:08:59.599100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.714 [2024-07-25 09:08:59.599136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.714 [2024-07-25 09:08:59.604701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.714 
[2024-07-25 09:08:59.604748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.714 [2024-07-25 09:08:59.604768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.714 [2024-07-25 09:08:59.610302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.714 [2024-07-25 09:08:59.610354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.714 [2024-07-25 09:08:59.610373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.714 [2024-07-25 09:08:59.615728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.714 [2024-07-25 09:08:59.615773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.714 [2024-07-25 09:08:59.615793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.714 [2024-07-25 09:08:59.621081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.714 [2024-07-25 09:08:59.621125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.714 [2024-07-25 09:08:59.621144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.714 [2024-07-25 09:08:59.626373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.714 [2024-07-25 09:08:59.626419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.714 [2024-07-25 09:08:59.626438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.714 [2024-07-25 09:08:59.631635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.714 [2024-07-25 09:08:59.631681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.714 [2024-07-25 09:08:59.631715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.714 [2024-07-25 09:08:59.636927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.714 [2024-07-25 09:08:59.636971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.714 [2024-07-25 09:08:59.636990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.714 [2024-07-25 09:08:59.642107] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.714 [2024-07-25 09:08:59.642152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.714 [2024-07-25 09:08:59.642171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.714 [2024-07-25 09:08:59.647387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.714 [2024-07-25 09:08:59.647433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.714 [2024-07-25 09:08:59.647453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.714 [2024-07-25 09:08:59.652823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.714 [2024-07-25 09:08:59.652880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.714 [2024-07-25 09:08:59.652900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.714 [2024-07-25 09:08:59.658115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.714 [2024-07-25 09:08:59.658159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.714 [2024-07-25 09:08:59.658178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.714 [2024-07-25 09:08:59.663389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.714 [2024-07-25 09:08:59.663449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.714 [2024-07-25 09:08:59.663500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.714 [2024-07-25 09:08:59.669226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.714 [2024-07-25 09:08:59.669274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.714 [2024-07-25 09:08:59.669295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.714 [2024-07-25 09:08:59.674629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.714 [2024-07-25 09:08:59.674675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.714 [2024-07-25 09:08:59.674695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.714 [2024-07-25 09:08:59.679787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.714 [2024-07-25 09:08:59.679843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.714 [2024-07-25 09:08:59.679863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.714 [2024-07-25 09:08:59.685155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.714 [2024-07-25 09:08:59.685199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.714 [2024-07-25 09:08:59.685222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.714 [2024-07-25 09:08:59.690361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.715 [2024-07-25 09:08:59.690418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.715 [2024-07-25 09:08:59.690440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.715 [2024-07-25 09:08:59.696438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.715 [2024-07-25 09:08:59.696485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.715 [2024-07-25 09:08:59.696520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.715 [2024-07-25 09:08:59.702107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.715 [2024-07-25 09:08:59.702169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.715 [2024-07-25 09:08:59.702188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.715 [2024-07-25 09:08:59.707681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.715 [2024-07-25 09:08:59.707726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.715 [2024-07-25 09:08:59.707745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.715 [2024-07-25 09:08:59.713555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.715 [2024-07-25 09:08:59.713601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.715 [2024-07-25 09:08:59.713620] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.715 [2024-07-25 09:08:59.719529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.715 [2024-07-25 09:08:59.719577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.715 [2024-07-25 09:08:59.719598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.715 [2024-07-25 09:08:59.725416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.715 [2024-07-25 09:08:59.725462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.715 [2024-07-25 09:08:59.725482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.715 [2024-07-25 09:08:59.731249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.715 [2024-07-25 09:08:59.731296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.715 [2024-07-25 09:08:59.731316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.715 [2024-07-25 09:08:59.737032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.715 [2024-07-25 09:08:59.737079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.715 [2024-07-25 09:08:59.737098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.715 [2024-07-25 09:08:59.742664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.715 [2024-07-25 09:08:59.742712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.715 [2024-07-25 09:08:59.742732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.715 [2024-07-25 09:08:59.748288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.715 [2024-07-25 09:08:59.748337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.715 [2024-07-25 09:08:59.748357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.715 [2024-07-25 09:08:59.754259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.715 [2024-07-25 09:08:59.754306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.715 [2024-07-25 09:08:59.754326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.715 [2024-07-25 09:08:59.760076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.715 [2024-07-25 09:08:59.760124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.715 [2024-07-25 09:08:59.760146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.715 [2024-07-25 09:08:59.765906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.715 [2024-07-25 09:08:59.765951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.715 [2024-07-25 09:08:59.765970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.715 [2024-07-25 09:08:59.771585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.715 [2024-07-25 09:08:59.771634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.715 [2024-07-25 09:08:59.771655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.715 [2024-07-25 09:08:59.777380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.715 [2024-07-25 09:08:59.777427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.715 [2024-07-25 09:08:59.777448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.715 [2024-07-25 09:08:59.783243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.715 [2024-07-25 09:08:59.783292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.715 [2024-07-25 09:08:59.783313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.715 [2024-07-25 09:08:59.789049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.715 [2024-07-25 09:08:59.789097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.715 [2024-07-25 09:08:59.789119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.715 [2024-07-25 09:08:59.794672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.715 [2024-07-25 09:08:59.794719] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.715 [2024-07-25 09:08:59.794739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.715 [2024-07-25 09:08:59.800238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.715 [2024-07-25 09:08:59.800284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.715 [2024-07-25 09:08:59.800304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.715 [2024-07-25 09:08:59.805955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.715 [2024-07-25 09:08:59.806002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.715 [2024-07-25 09:08:59.806022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.715 [2024-07-25 09:08:59.811682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.715 [2024-07-25 09:08:59.811729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.715 [2024-07-25 09:08:59.811749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.715 [2024-07-25 09:08:59.817361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.715 [2024-07-25 09:08:59.817407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.715 [2024-07-25 09:08:59.817427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.715 [2024-07-25 09:08:59.822928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.715 [2024-07-25 09:08:59.822981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.716 [2024-07-25 09:08:59.823001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.974 [2024-07-25 09:08:59.828604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.974 [2024-07-25 09:08:59.828650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.974 [2024-07-25 09:08:59.828671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.974 [2024-07-25 09:08:59.834301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.974 [2024-07-25 09:08:59.834347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.974 [2024-07-25 09:08:59.834365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.974 [2024-07-25 09:08:59.839905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.974 [2024-07-25 09:08:59.839952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.974 [2024-07-25 09:08:59.839973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.974 [2024-07-25 09:08:59.845409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.974 [2024-07-25 09:08:59.845457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.974 [2024-07-25 09:08:59.845478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:52.974 [2024-07-25 09:08:59.851090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.974 [2024-07-25 09:08:59.851152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.974 [2024-07-25 09:08:59.851173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:52.974 [2024-07-25 09:08:59.856644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.974 [2024-07-25 09:08:59.856691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.974 [2024-07-25 09:08:59.856711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:52.974 [2024-07-25 09:08:59.862147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:26:52.974 [2024-07-25 09:08:59.862192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.974 [2024-07-25 09:08:59.862212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:52.974 00:26:52.974 Latency(us) 00:26:52.974 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:52.975 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:52.975 nvme0n1 : 2.00 5467.97 683.50 0.00 0.00 2921.17 2442.71 6196.13 00:26:52.975 =================================================================================================================== 00:26:52.975 Total : 5467.97 683.50 0.00 0.00 2921.17 2442.71 6196.13 00:26:52.975 0 00:26:52.975 09:08:59 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:52.975 09:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:52.975 09:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:52.975 09:08:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:52.975 | .driver_specific 00:26:52.975 | .nvme_error 00:26:52.975 | .status_code 00:26:52.975 | .command_transient_transport_error' 00:26:53.234 09:09:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 353 > 0 )) 00:26:53.234 09:09:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 86297 00:26:53.234 09:09:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 86297 ']' 00:26:53.234 09:09:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 86297 00:26:53.234 09:09:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:26:53.234 09:09:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:53.234 09:09:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86297 00:26:53.234 09:09:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:53.234 09:09:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:53.234 killing process with pid 86297 00:26:53.234 09:09:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86297' 00:26:53.234 09:09:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 86297 00:26:53.234 Received shutdown signal, test time was about 2.000000 seconds 00:26:53.234 00:26:53.234 Latency(us) 00:26:53.234 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:53.234 =================================================================================================================== 00:26:53.234 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:53.234 09:09:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 86297 00:26:54.654 09:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:26:54.654 09:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:54.654 09:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:54.654 09:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:54.654 09:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:54.654 09:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=86364 00:26:54.654 09:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 86364 /var/tmp/bperf.sock 00:26:54.654 09:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:26:54.654 09:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 86364 ']' 00:26:54.654 09:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:54.654 09:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:54.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:54.654 09:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:54.654 09:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:54.654 09:09:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:54.654 [2024-07-25 09:09:01.581799] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:26:54.654 [2024-07-25 09:09:01.582015] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86364 ] 00:26:54.654 [2024-07-25 09:09:01.757542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:55.220 [2024-07-25 09:09:02.033387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:55.220 [2024-07-25 09:09:02.259171] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:26:55.479 09:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:55.479 09:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:26:55.479 09:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:55.479 09:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:55.737 09:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:55.737 09:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.737 09:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:55.737 09:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.737 09:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:55.737 09:09:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:55.996 nvme0n1 00:26:55.996 09:09:03 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:55.996 09:09:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.996 09:09:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:55.996 09:09:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.996 09:09:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:55.996 09:09:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:56.255 Running I/O for 2 seconds... 00:26:56.255 [2024-07-25 09:09:03.218311] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fef90 00:26:56.255 [2024-07-25 09:09:03.221482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.255 [2024-07-25 09:09:03.221559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.255 [2024-07-25 09:09:03.238283] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195feb58 00:26:56.255 [2024-07-25 09:09:03.241422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.255 [2024-07-25 09:09:03.241479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:56.255 [2024-07-25 09:09:03.258900] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:26:56.255 [2024-07-25 09:09:03.262036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.255 [2024-07-25 09:09:03.262096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:56.255 [2024-07-25 09:09:03.279318] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:26:56.255 [2024-07-25 09:09:03.282502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.255 [2024-07-25 09:09:03.282575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:56.255 [2024-07-25 09:09:03.299719] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fd208 00:26:56.255 [2024-07-25 09:09:03.302863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.255 [2024-07-25 09:09:03.302920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:56.255 [2024-07-25 09:09:03.320623] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) 
with pdu=0x2000195fc998 00:26:56.255 [2024-07-25 09:09:03.324069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.255 [2024-07-25 09:09:03.324155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:56.255 [2024-07-25 09:09:03.344426] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fc128 00:26:56.255 [2024-07-25 09:09:03.347741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.255 [2024-07-25 09:09:03.347852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:56.255 [2024-07-25 09:09:03.367988] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fb8b8 00:26:56.514 [2024-07-25 09:09:03.371279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.514 [2024-07-25 09:09:03.371388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:56.514 [2024-07-25 09:09:03.391658] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fb048 00:26:56.514 [2024-07-25 09:09:03.395097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:21139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.514 [2024-07-25 09:09:03.395238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:56.514 [2024-07-25 09:09:03.415374] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fa7d8 00:26:56.514 [2024-07-25 09:09:03.418624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.514 [2024-07-25 09:09:03.418754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:56.514 [2024-07-25 09:09:03.438707] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f9f68 00:26:56.514 [2024-07-25 09:09:03.441963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.514 [2024-07-25 09:09:03.442056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:56.514 [2024-07-25 09:09:03.460650] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f96f8 00:26:56.514 [2024-07-25 09:09:03.463575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.514 [2024-07-25 09:09:03.463643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:56.514 [2024-07-25 09:09:03.481257] tcp.c:2113:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f8e88 00:26:56.514 [2024-07-25 09:09:03.484190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.514 [2024-07-25 09:09:03.484277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:56.514 [2024-07-25 09:09:03.501584] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f8618 00:26:56.514 [2024-07-25 09:09:03.504464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.514 [2024-07-25 09:09:03.504524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:56.514 [2024-07-25 09:09:03.521559] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7da8 00:26:56.514 [2024-07-25 09:09:03.524414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.514 [2024-07-25 09:09:03.524471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:56.514 [2024-07-25 09:09:03.541775] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7538 00:26:56.514 [2024-07-25 09:09:03.544719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.514 [2024-07-25 09:09:03.544779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:56.514 [2024-07-25 09:09:03.562465] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6cc8 00:26:56.514 [2024-07-25 09:09:03.565432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.514 [2024-07-25 09:09:03.565495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:56.514 [2024-07-25 09:09:03.584450] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6458 00:26:56.514 [2024-07-25 09:09:03.587334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:15625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.514 [2024-07-25 09:09:03.587401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:56.514 [2024-07-25 09:09:03.606160] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f5be8 00:26:56.514 [2024-07-25 09:09:03.609038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:9005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.514 [2024-07-25 09:09:03.609096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:56.514 
[2024-07-25 09:09:03.627255] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f5378 00:26:56.773 [2024-07-25 09:09:03.630077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:24317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.773 [2024-07-25 09:09:03.630137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:56.774 [2024-07-25 09:09:03.648594] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4b08 00:26:56.774 [2024-07-25 09:09:03.651283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.774 [2024-07-25 09:09:03.651346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:56.774 [2024-07-25 09:09:03.669693] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4298 00:26:56.774 [2024-07-25 09:09:03.672514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.774 [2024-07-25 09:09:03.672580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:56.774 [2024-07-25 09:09:03.691219] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f3a28 00:26:56.774 [2024-07-25 09:09:03.693988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.774 [2024-07-25 09:09:03.694056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:56.774 [2024-07-25 09:09:03.711775] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f31b8 00:26:56.774 [2024-07-25 09:09:03.714371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.774 [2024-07-25 09:09:03.714427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:56.774 [2024-07-25 09:09:03.732376] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f2948 00:26:56.774 [2024-07-25 09:09:03.734937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.774 [2024-07-25 09:09:03.735022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:56.774 [2024-07-25 09:09:03.752242] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f20d8 00:26:56.774 [2024-07-25 09:09:03.754688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.774 [2024-07-25 09:09:03.754742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:56.774 [2024-07-25 09:09:03.771682] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f1868 00:26:56.774 [2024-07-25 09:09:03.774173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.774 [2024-07-25 09:09:03.774227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:56.774 [2024-07-25 09:09:03.791057] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0ff8 00:26:56.774 [2024-07-25 09:09:03.793513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.774 [2024-07-25 09:09:03.793566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:56.774 [2024-07-25 09:09:03.809820] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0788 00:26:56.774 [2024-07-25 09:09:03.812269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.774 [2024-07-25 09:09:03.812321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:56.774 [2024-07-25 09:09:03.828890] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:26:56.774 [2024-07-25 09:09:03.831409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:14505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.774 [2024-07-25 09:09:03.831463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:56.774 [2024-07-25 09:09:03.849488] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ef6a8 00:26:56.774 [2024-07-25 09:09:03.852037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.774 [2024-07-25 09:09:03.852094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:56.774 [2024-07-25 09:09:03.870228] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eee38 00:26:56.774 [2024-07-25 09:09:03.872668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.774 [2024-07-25 09:09:03.872737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:57.033 [2024-07-25 09:09:03.891308] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ee5c8 00:26:57.033 [2024-07-25 09:09:03.893757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.033 [2024-07-25 09:09:03.893830] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:57.033 [2024-07-25 09:09:03.910604] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195edd58 00:26:57.033 [2024-07-25 09:09:03.912897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:14068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.033 [2024-07-25 09:09:03.912950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:57.033 [2024-07-25 09:09:03.929888] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ed4e8 00:26:57.033 [2024-07-25 09:09:03.932402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.033 [2024-07-25 09:09:03.932458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:57.033 [2024-07-25 09:09:03.950422] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ecc78 00:26:57.033 [2024-07-25 09:09:03.952668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.033 [2024-07-25 09:09:03.952717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:57.033 [2024-07-25 09:09:03.969174] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ec408 00:26:57.033 [2024-07-25 09:09:03.971538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.033 [2024-07-25 09:09:03.971990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:57.033 [2024-07-25 09:09:03.990587] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ebb98 00:26:57.033 [2024-07-25 09:09:03.993052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.033 [2024-07-25 09:09:03.993385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:57.033 [2024-07-25 09:09:04.011607] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eb328 00:26:57.033 [2024-07-25 09:09:04.014092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:12871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.033 [2024-07-25 09:09:04.014384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:57.033 [2024-07-25 09:09:04.032457] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eaab8 00:26:57.033 [2024-07-25 09:09:04.034958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:57.033 [2024-07-25 09:09:04.035213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:57.033 [2024-07-25 09:09:04.053240] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ea248 00:26:57.033 [2024-07-25 09:09:04.055265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:19820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.033 [2024-07-25 09:09:04.055325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:57.033 [2024-07-25 09:09:04.073528] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e99d8 00:26:57.033 [2024-07-25 09:09:04.075637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:15833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.033 [2024-07-25 09:09:04.075706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:57.033 [2024-07-25 09:09:04.095178] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e9168 00:26:57.033 [2024-07-25 09:09:04.097446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.033 [2024-07-25 09:09:04.097512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:57.033 [2024-07-25 09:09:04.117377] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e88f8 00:26:57.033 [2024-07-25 09:09:04.119475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.033 [2024-07-25 09:09:04.119554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:57.034 [2024-07-25 09:09:04.138194] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e8088 00:26:57.034 [2024-07-25 09:09:04.140303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.034 [2024-07-25 09:09:04.140379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:57.292 [2024-07-25 09:09:04.159759] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e7818 00:26:57.292 [2024-07-25 09:09:04.161865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.292 [2024-07-25 09:09:04.161959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:57.292 [2024-07-25 09:09:04.180899] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6fa8 00:26:57.292 [2024-07-25 09:09:04.182936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 
nsid:1 lba:10450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.292 [2024-07-25 09:09:04.183002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:57.292 [2024-07-25 09:09:04.202942] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6738 00:26:57.292 [2024-07-25 09:09:04.205079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.292 [2024-07-25 09:09:04.205144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:57.292 [2024-07-25 09:09:04.223574] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5ec8 00:26:57.292 [2024-07-25 09:09:04.225553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.292 [2024-07-25 09:09:04.225644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:57.292 [2024-07-25 09:09:04.244789] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5658 00:26:57.292 [2024-07-25 09:09:04.246796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.292 [2024-07-25 09:09:04.246891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:57.292 [2024-07-25 09:09:04.265181] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e4de8 00:26:57.292 [2024-07-25 09:09:04.266961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.293 [2024-07-25 09:09:04.267042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:57.293 [2024-07-25 09:09:04.285335] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e4578 00:26:57.293 [2024-07-25 09:09:04.287416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.293 [2024-07-25 09:09:04.287495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:57.293 [2024-07-25 09:09:04.306880] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e3d08 00:26:57.293 [2024-07-25 09:09:04.308903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.293 [2024-07-25 09:09:04.308989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:57.293 [2024-07-25 09:09:04.328176] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e3498 00:26:57.293 [2024-07-25 09:09:04.330181] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:15163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.293 [2024-07-25 09:09:04.330272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:57.293 [2024-07-25 09:09:04.350212] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e2c28 00:26:57.293 [2024-07-25 09:09:04.352306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:24022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.293 [2024-07-25 09:09:04.352387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:57.293 [2024-07-25 09:09:04.372161] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e23b8 00:26:57.293 [2024-07-25 09:09:04.374016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.293 [2024-07-25 09:09:04.374076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:57.293 [2024-07-25 09:09:04.393622] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e1b48 00:26:57.293 [2024-07-25 09:09:04.395408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.293 [2024-07-25 09:09:04.395470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:57.552 [2024-07-25 09:09:04.415097] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e12d8 00:26:57.552 [2024-07-25 09:09:04.417026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.552 [2024-07-25 09:09:04.417111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:57.552 [2024-07-25 09:09:04.436976] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e0a68 00:26:57.552 [2024-07-25 09:09:04.438770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:25211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.552 [2024-07-25 09:09:04.438867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:57.552 [2024-07-25 09:09:04.458851] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e01f8 00:26:57.552 [2024-07-25 09:09:04.460736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.552 [2024-07-25 09:09:04.460821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:57.552 [2024-07-25 09:09:04.479828] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with 
pdu=0x2000195df988 00:26:57.552 [2024-07-25 09:09:04.481664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.552 [2024-07-25 09:09:04.481745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:57.552 [2024-07-25 09:09:04.500973] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df118 00:26:57.552 [2024-07-25 09:09:04.502638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.552 [2024-07-25 09:09:04.502707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:57.552 [2024-07-25 09:09:04.521540] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195de8a8 00:26:57.552 [2024-07-25 09:09:04.523198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.552 [2024-07-25 09:09:04.523279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:57.552 [2024-07-25 09:09:04.542493] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195de038 00:26:57.552 [2024-07-25 09:09:04.544156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.552 [2024-07-25 09:09:04.544235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:57.552 [2024-07-25 09:09:04.572610] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195de038 00:26:57.552 [2024-07-25 09:09:04.575901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.552 [2024-07-25 09:09:04.575972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.552 [2024-07-25 09:09:04.593373] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195de8a8 00:26:57.552 [2024-07-25 09:09:04.596652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.552 [2024-07-25 09:09:04.596715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:57.552 [2024-07-25 09:09:04.614732] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df118 00:26:57.552 [2024-07-25 09:09:04.617915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.552 [2024-07-25 09:09:04.617982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:57.552 [2024-07-25 09:09:04.636512] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df988 00:26:57.552 [2024-07-25 09:09:04.639654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.552 [2024-07-25 09:09:04.639726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:57.552 [2024-07-25 09:09:04.658511] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e01f8 00:26:57.552 [2024-07-25 09:09:04.662009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.552 [2024-07-25 09:09:04.662098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:57.812 [2024-07-25 09:09:04.680606] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e0a68 00:26:57.812 [2024-07-25 09:09:04.683689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.812 [2024-07-25 09:09:04.683757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:57.812 [2024-07-25 09:09:04.702057] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e12d8 00:26:57.812 [2024-07-25 09:09:04.705264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:3902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.812 [2024-07-25 09:09:04.705348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:57.812 [2024-07-25 09:09:04.723552] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e1b48 00:26:57.812 [2024-07-25 09:09:04.726641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.812 [2024-07-25 09:09:04.726712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:57.812 [2024-07-25 09:09:04.744382] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e23b8 00:26:57.812 [2024-07-25 09:09:04.747380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.812 [2024-07-25 09:09:04.747441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:57.812 [2024-07-25 09:09:04.765955] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e2c28 00:26:57.812 [2024-07-25 09:09:04.769018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:14355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.812 [2024-07-25 09:09:04.769095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 
sqhd:006e p:0 m:0 dnr:0 00:26:57.812 [2024-07-25 09:09:04.787373] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e3498 00:26:57.812 [2024-07-25 09:09:04.790378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.812 [2024-07-25 09:09:04.790449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:57.812 [2024-07-25 09:09:04.808572] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e3d08 00:26:57.812 [2024-07-25 09:09:04.811465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:25061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.813 [2024-07-25 09:09:04.811528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:57.813 [2024-07-25 09:09:04.829585] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e4578 00:26:57.813 [2024-07-25 09:09:04.832549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.813 [2024-07-25 09:09:04.832621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:57.813 [2024-07-25 09:09:04.851153] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e4de8 00:26:57.813 [2024-07-25 09:09:04.854215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.813 [2024-07-25 09:09:04.854273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:57.813 [2024-07-25 09:09:04.873228] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5658 00:26:57.813 [2024-07-25 09:09:04.876143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:10009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.813 [2024-07-25 09:09:04.876213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:57.813 [2024-07-25 09:09:04.894677] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5ec8 00:26:57.813 [2024-07-25 09:09:04.897506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.813 [2024-07-25 09:09:04.897597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:57.813 [2024-07-25 09:09:04.916748] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6738 00:26:57.813 [2024-07-25 09:09:04.919679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:9778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.813 [2024-07-25 09:09:04.919785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:58.072 [2024-07-25 09:09:04.938429] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6fa8 00:26:58.072 [2024-07-25 09:09:04.941242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:6167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.072 [2024-07-25 09:09:04.941312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:58.072 [2024-07-25 09:09:04.959285] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e7818 00:26:58.072 [2024-07-25 09:09:04.962078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:18748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.072 [2024-07-25 09:09:04.962165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:58.072 [2024-07-25 09:09:04.980741] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e8088 00:26:58.072 [2024-07-25 09:09:04.983433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.072 [2024-07-25 09:09:04.983492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:58.072 [2024-07-25 09:09:05.001496] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e88f8 00:26:58.072 [2024-07-25 09:09:05.004185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:18003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.072 [2024-07-25 09:09:05.004257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:58.072 [2024-07-25 09:09:05.022230] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e9168 00:26:58.072 [2024-07-25 09:09:05.024774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:17987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.072 [2024-07-25 09:09:05.024861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:58.072 [2024-07-25 09:09:05.042989] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e99d8 00:26:58.072 [2024-07-25 09:09:05.045583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:17502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.072 [2024-07-25 09:09:05.045650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:58.072 [2024-07-25 09:09:05.063273] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ea248 00:26:58.072 [2024-07-25 09:09:05.065796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:24726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.072 [2024-07-25 09:09:05.065867] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:58.072 [2024-07-25 09:09:05.083445] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eaab8 00:26:58.072 [2024-07-25 09:09:05.086022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:2045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.072 [2024-07-25 09:09:05.086076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:58.072 [2024-07-25 09:09:05.104341] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eb328 00:26:58.072 [2024-07-25 09:09:05.106809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:24448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.072 [2024-07-25 09:09:05.106872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:58.072 [2024-07-25 09:09:05.124271] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ebb98 00:26:58.072 [2024-07-25 09:09:05.126941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.072 [2024-07-25 09:09:05.126994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:58.072 [2024-07-25 09:09:05.145527] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ec408 00:26:58.072 [2024-07-25 09:09:05.148090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.072 [2024-07-25 09:09:05.148160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:58.072 [2024-07-25 09:09:05.165908] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ecc78 00:26:58.072 [2024-07-25 09:09:05.168312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.072 [2024-07-25 09:09:05.168383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:58.332 [2024-07-25 09:09:05.187274] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ed4e8 00:26:58.332 [2024-07-25 09:09:05.189856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:58.332 [2024-07-25 09:09:05.189903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:58.332 00:26:58.332 Latency(us) 00:26:58.332 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:58.332 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:58.332 nvme0n1 : 2.01 11979.38 46.79 0.00 0.00 10674.08 8996.31 38368.35 00:26:58.332 
=================================================================================================================== 00:26:58.332 Total : 11979.38 46.79 0.00 0.00 10674.08 8996.31 38368.35 00:26:58.332 0 00:26:58.332 09:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:58.332 09:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:58.332 09:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:58.332 | .driver_specific 00:26:58.332 | .nvme_error 00:26:58.332 | .status_code 00:26:58.332 | .command_transient_transport_error' 00:26:58.332 09:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:58.615 09:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 94 > 0 )) 00:26:58.615 09:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 86364 00:26:58.615 09:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 86364 ']' 00:26:58.615 09:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 86364 00:26:58.615 09:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:26:58.615 09:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:58.615 09:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86364 00:26:58.615 killing process with pid 86364 00:26:58.615 Received shutdown signal, test time was about 2.000000 seconds 00:26:58.615 00:26:58.615 Latency(us) 00:26:58.615 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:58.615 =================================================================================================================== 00:26:58.615 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:58.615 09:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:58.615 09:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:58.615 09:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86364' 00:26:58.615 09:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 86364 00:26:58.615 09:09:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 86364 00:26:59.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
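For reference, the pass/fail check traced just above reduces to one jq query against the bperf RPC socket: bdev_get_iostat exposes the per-opcode NVMe error counters (enabled earlier with --nvme-error-stat), and the sub-test passes only if the COMMAND TRANSIENT TRANSPORT ERROR count is non-zero. A minimal sketch follows; the helper body is paraphrased from the trace, not the verbatim host/digest.sh source, and the standalone function wrapper is an assumption.

    # Hedged sketch of the get_transient_errcount check traced above (paraphrased,
    # not the exact digest.sh code). Queries bdevperf's RPC socket for the bdev's
    # iostat and extracts the transient-transport-error completion count.
    get_transient_errcount() {
        local bdev=$1
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
    }

    errcount=$(get_transient_errcount nvme0n1)   # 94 in the run above
    (( errcount > 0 ))                           # the assertion behind "(( 94 > 0 ))"

In this run the injected data digest errors produced 94 such completions, so the assertion holds and the bdevperf instance used for this sub-test (pid 86364) is torn down before the next sub-test (randwrite 131072 16) starts.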
00:26:59.553 09:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:26:59.553 09:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:59.553 09:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:59.553 09:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:59.553 09:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:59.553 09:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=86431 00:26:59.553 09:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:26:59.553 09:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 86431 /var/tmp/bperf.sock 00:26:59.553 09:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 86431 ']' 00:26:59.553 09:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:59.553 09:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:59.553 09:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:59.553 09:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:59.553 09:09:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:59.811 [2024-07-25 09:09:06.667317] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:26:59.811 [2024-07-25 09:09:06.667709] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86431 ] 00:26:59.811 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:59.811 Zero copy mechanism will not be used. 
00:26:59.811 [2024-07-25 09:09:06.845385] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:00.107 [2024-07-25 09:09:07.092733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:00.365 [2024-07-25 09:09:07.300605] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:27:00.624 09:09:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:00.624 09:09:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:27:00.624 09:09:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:00.624 09:09:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:00.883 09:09:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:00.883 09:09:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.883 09:09:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:00.883 09:09:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.883 09:09:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:00.883 09:09:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:01.142 nvme0n1 00:27:01.142 09:09:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:01.142 09:09:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.142 09:09:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:01.142 09:09:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.142 09:09:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:01.142 09:09:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:01.401 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:01.401 Zero copy mechanism will not be used. 00:27:01.401 Running I/O for 2 seconds... 
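The xtrace above covers the entire setup for run_bperf_err randwrite 131072 16, condensed here into a runnable sketch. The paths, the backgrounding, and the choice of the application's default RPC socket for the accel injection are assumptions inferred from the trace; the flags themselves are taken verbatim from the commands shown.

    # Hedged reconstruction of the traced setup, not the literal digest.sh code.
    SPDK=/home/vagrant/spdk_repo/spdk
    BPERF_SOCK=/var/tmp/bperf.sock

    # 1. Start bdevperf as the host under test: randwrite, 128 KiB IOs, QD 16, 2 s.
    "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
        -w randwrite -o 131072 -t 2 -q 16 -z &

    # 2. Retry failed IOs indefinitely and keep per-opcode NVMe error statistics,
    #    which is what get_transient_errcount reads back afterwards.
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options \
        --nvme-error-stat --bdev-retry-count -1

    # 3. Attach the NVMe-oF/TCP target with data digest enabled (--ddgst).
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # 4. Corrupt 32 crc32c operations in the accel layer (rpc_cmd in the trace,
    #    i.e. the default RPC socket) so data digests mismatch, then drive IO.
    "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests

Each corrupted digest then surfaces in the output below as a data_crc32_calc_done error followed by a WRITE completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which bdev_nvme retries and counts.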
00:27:01.401 [2024-07-25 09:09:08.341263] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.401 [2024-07-25 09:09:08.341694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.401 [2024-07-25 09:09:08.341752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.401 [2024-07-25 09:09:08.349742] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.401 [2024-07-25 09:09:08.350189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.401 [2024-07-25 09:09:08.350238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.401 [2024-07-25 09:09:08.357442] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.401 [2024-07-25 09:09:08.357812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.401 [2024-07-25 09:09:08.357876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.401 [2024-07-25 09:09:08.365131] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.401 [2024-07-25 09:09:08.365537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.401 [2024-07-25 09:09:08.365580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.401 [2024-07-25 09:09:08.372856] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.401 [2024-07-25 09:09:08.373265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.402 [2024-07-25 09:09:08.373311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.402 [2024-07-25 09:09:08.380686] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.402 [2024-07-25 09:09:08.381091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.402 [2024-07-25 09:09:08.381142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.402 [2024-07-25 09:09:08.389400] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.402 [2024-07-25 09:09:08.389802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.402 [2024-07-25 09:09:08.389855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.402 [2024-07-25 09:09:08.397147] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.402 [2024-07-25 09:09:08.397506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.402 [2024-07-25 09:09:08.397557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.402 [2024-07-25 09:09:08.404808] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.402 [2024-07-25 09:09:08.405244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.402 [2024-07-25 09:09:08.405286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.402 [2024-07-25 09:09:08.412493] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.402 [2024-07-25 09:09:08.412867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.402 [2024-07-25 09:09:08.412924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.402 [2024-07-25 09:09:08.419998] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.402 [2024-07-25 09:09:08.420352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.402 [2024-07-25 09:09:08.420408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.402 [2024-07-25 09:09:08.427360] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.402 [2024-07-25 09:09:08.427735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.402 [2024-07-25 09:09:08.427778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.402 [2024-07-25 09:09:08.434575] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.402 [2024-07-25 09:09:08.434960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.402 [2024-07-25 09:09:08.435002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.402 [2024-07-25 09:09:08.441766] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.402 [2024-07-25 09:09:08.442141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.402 [2024-07-25 
09:09:08.442204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.402 [2024-07-25 09:09:08.449107] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.402 [2024-07-25 09:09:08.449478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.402 [2024-07-25 09:09:08.449520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.402 [2024-07-25 09:09:08.456378] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.402 [2024-07-25 09:09:08.456736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.402 [2024-07-25 09:09:08.456788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.402 [2024-07-25 09:09:08.463661] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.402 [2024-07-25 09:09:08.464068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.402 [2024-07-25 09:09:08.464119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.402 [2024-07-25 09:09:08.471019] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.402 [2024-07-25 09:09:08.471390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.402 [2024-07-25 09:09:08.471432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.402 [2024-07-25 09:09:08.478344] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.402 [2024-07-25 09:09:08.478701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.402 [2024-07-25 09:09:08.478753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.402 [2024-07-25 09:09:08.485627] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.402 [2024-07-25 09:09:08.486013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.402 [2024-07-25 09:09:08.486055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.402 [2024-07-25 09:09:08.492874] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.402 [2024-07-25 09:09:08.493247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.402 [2024-07-25 09:09:08.493288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.402 [2024-07-25 09:09:08.500102] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.402 [2024-07-25 09:09:08.500463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.402 [2024-07-25 09:09:08.500514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.402 [2024-07-25 09:09:08.507476] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.402 [2024-07-25 09:09:08.507857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.402 [2024-07-25 09:09:08.507938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.661 [2024-07-25 09:09:08.515006] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.661 [2024-07-25 09:09:08.515391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.661 [2024-07-25 09:09:08.515432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.661 [2024-07-25 09:09:08.522313] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.661 [2024-07-25 09:09:08.522673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.661 [2024-07-25 09:09:08.522729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.661 [2024-07-25 09:09:08.529615] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.661 [2024-07-25 09:09:08.530035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.661 [2024-07-25 09:09:08.530082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.661 [2024-07-25 09:09:08.537083] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.661 [2024-07-25 09:09:08.537432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.661 [2024-07-25 09:09:08.537480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.661 [2024-07-25 09:09:08.544514] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.661 [2024-07-25 09:09:08.544926] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.661 [2024-07-25 09:09:08.544976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.661 [2024-07-25 09:09:08.551928] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.661 [2024-07-25 09:09:08.552318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.661 [2024-07-25 09:09:08.552360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.661 [2024-07-25 09:09:08.559272] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.661 [2024-07-25 09:09:08.559631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.661 [2024-07-25 09:09:08.559691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.661 [2024-07-25 09:09:08.566500] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.661 [2024-07-25 09:09:08.566906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.661 [2024-07-25 09:09:08.566948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.661 [2024-07-25 09:09:08.573854] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.661 [2024-07-25 09:09:08.574222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.661 [2024-07-25 09:09:08.574263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.661 [2024-07-25 09:09:08.581514] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.661 [2024-07-25 09:09:08.581892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.661 [2024-07-25 09:09:08.581942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.661 [2024-07-25 09:09:08.589324] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.661 [2024-07-25 09:09:08.589706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.661 [2024-07-25 09:09:08.589748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.661 [2024-07-25 09:09:08.597072] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.661 [2024-07-25 09:09:08.597444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.661 [2024-07-25 09:09:08.597493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.661 [2024-07-25 09:09:08.604729] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.661 [2024-07-25 09:09:08.605110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.661 [2024-07-25 09:09:08.605170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.661 [2024-07-25 09:09:08.612426] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.661 [2024-07-25 09:09:08.612843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.661 [2024-07-25 09:09:08.612884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.661 [2024-07-25 09:09:08.620526] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.661 [2024-07-25 09:09:08.620931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.661 [2024-07-25 09:09:08.620981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.661 [2024-07-25 09:09:08.628433] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.661 [2024-07-25 09:09:08.628804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.661 [2024-07-25 09:09:08.628859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.661 [2024-07-25 09:09:08.638656] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.661 [2024-07-25 09:09:08.639065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.661 [2024-07-25 09:09:08.639133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.661 [2024-07-25 09:09:08.647508] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.661 [2024-07-25 09:09:08.647935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.661 [2024-07-25 09:09:08.647986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.661 [2024-07-25 
09:09:08.658345] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.661 [2024-07-25 09:09:08.658774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.661 [2024-07-25 09:09:08.658836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.661 [2024-07-25 09:09:08.669250] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.661 [2024-07-25 09:09:08.669654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.661 [2024-07-25 09:09:08.669695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.661 [2024-07-25 09:09:08.680142] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.661 [2024-07-25 09:09:08.680548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.661 [2024-07-25 09:09:08.680601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.661 [2024-07-25 09:09:08.690800] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.661 [2024-07-25 09:09:08.691229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.661 [2024-07-25 09:09:08.691280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.661 [2024-07-25 09:09:08.701338] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.661 [2024-07-25 09:09:08.701722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.661 [2024-07-25 09:09:08.701774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.662 [2024-07-25 09:09:08.711703] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.662 [2024-07-25 09:09:08.712154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.662 [2024-07-25 09:09:08.712201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.662 [2024-07-25 09:09:08.722341] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.662 [2024-07-25 09:09:08.722745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.662 [2024-07-25 09:09:08.722796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.662 [2024-07-25 09:09:08.732867] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.662 [2024-07-25 09:09:08.733286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.662 [2024-07-25 09:09:08.733339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.662 [2024-07-25 09:09:08.743625] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.662 [2024-07-25 09:09:08.744055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.662 [2024-07-25 09:09:08.744115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.662 [2024-07-25 09:09:08.753586] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.662 [2024-07-25 09:09:08.753991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.662 [2024-07-25 09:09:08.754040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.662 [2024-07-25 09:09:08.761028] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.662 [2024-07-25 09:09:08.761411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.662 [2024-07-25 09:09:08.761468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.662 [2024-07-25 09:09:08.769026] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.662 [2024-07-25 09:09:08.769391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.662 [2024-07-25 09:09:08.769434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.921 [2024-07-25 09:09:08.776686] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.921 [2024-07-25 09:09:08.777085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.921 [2024-07-25 09:09:08.777144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.921 [2024-07-25 09:09:08.784484] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.921 [2024-07-25 09:09:08.784879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.921 [2024-07-25 09:09:08.784935] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.921 [2024-07-25 09:09:08.791778] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.921 [2024-07-25 09:09:08.792204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.921 [2024-07-25 09:09:08.792251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.921 [2024-07-25 09:09:08.799010] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.921 [2024-07-25 09:09:08.799371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.921 [2024-07-25 09:09:08.799422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.921 [2024-07-25 09:09:08.806462] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.921 [2024-07-25 09:09:08.806818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.921 [2024-07-25 09:09:08.806889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.921 [2024-07-25 09:09:08.814003] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.921 [2024-07-25 09:09:08.814382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.921 [2024-07-25 09:09:08.814424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.921 [2024-07-25 09:09:08.821404] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.921 [2024-07-25 09:09:08.821772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.921 [2024-07-25 09:09:08.821837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.921 [2024-07-25 09:09:08.828769] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.921 [2024-07-25 09:09:08.829153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.921 [2024-07-25 09:09:08.829200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.921 [2024-07-25 09:09:08.836151] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.921 [2024-07-25 09:09:08.836548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:01.921 [2024-07-25 09:09:08.836599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.921 [2024-07-25 09:09:08.843742] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.921 [2024-07-25 09:09:08.844144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.921 [2024-07-25 09:09:08.844219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.921 [2024-07-25 09:09:08.851113] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.921 [2024-07-25 09:09:08.851481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.921 [2024-07-25 09:09:08.851523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.921 [2024-07-25 09:09:08.858437] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.921 [2024-07-25 09:09:08.858886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.921 [2024-07-25 09:09:08.858935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.921 [2024-07-25 09:09:08.865913] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.921 [2024-07-25 09:09:08.866276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.921 [2024-07-25 09:09:08.866327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.921 [2024-07-25 09:09:08.873230] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.921 [2024-07-25 09:09:08.873592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.921 [2024-07-25 09:09:08.873634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.921 [2024-07-25 09:09:08.880772] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.921 [2024-07-25 09:09:08.881144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.921 [2024-07-25 09:09:08.881199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.921 [2024-07-25 09:09:08.888203] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.921 [2024-07-25 09:09:08.888594] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.921 [2024-07-25 09:09:08.888636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.921 [2024-07-25 09:09:08.895326] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.921 [2024-07-25 09:09:08.895702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.921 [2024-07-25 09:09:08.895744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.921 [2024-07-25 09:09:08.902760] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.921 [2024-07-25 09:09:08.903189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.921 [2024-07-25 09:09:08.903241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.921 [2024-07-25 09:09:08.910300] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.921 [2024-07-25 09:09:08.910700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.921 [2024-07-25 09:09:08.910742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.921 [2024-07-25 09:09:08.917696] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.921 [2024-07-25 09:09:08.918065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.921 [2024-07-25 09:09:08.918118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.921 [2024-07-25 09:09:08.924971] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.921 [2024-07-25 09:09:08.925359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.921 [2024-07-25 09:09:08.925410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.921 [2024-07-25 09:09:08.932406] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.921 [2024-07-25 09:09:08.932809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.921 [2024-07-25 09:09:08.932865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.921 [2024-07-25 09:09:08.940216] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) 
with pdu=0x2000195fef90 00:27:01.922 [2024-07-25 09:09:08.940609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.922 [2024-07-25 09:09:08.940659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.922 [2024-07-25 09:09:08.947609] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.922 [2024-07-25 09:09:08.948013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.922 [2024-07-25 09:09:08.948055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.922 [2024-07-25 09:09:08.955212] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.922 [2024-07-25 09:09:08.955604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.922 [2024-07-25 09:09:08.955645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.922 [2024-07-25 09:09:08.962803] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.922 [2024-07-25 09:09:08.963174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.922 [2024-07-25 09:09:08.963231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.922 [2024-07-25 09:09:08.970360] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.922 [2024-07-25 09:09:08.970761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.922 [2024-07-25 09:09:08.970801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.922 [2024-07-25 09:09:08.977978] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.922 [2024-07-25 09:09:08.978339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.922 [2024-07-25 09:09:08.978390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.922 [2024-07-25 09:09:08.985537] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.922 [2024-07-25 09:09:08.985945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.922 [2024-07-25 09:09:08.985996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.922 [2024-07-25 09:09:08.993333] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.922 [2024-07-25 09:09:08.993701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.922 [2024-07-25 09:09:08.993758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.922 [2024-07-25 09:09:09.000908] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.922 [2024-07-25 09:09:09.001300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.922 [2024-07-25 09:09:09.001350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.922 [2024-07-25 09:09:09.008427] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.922 [2024-07-25 09:09:09.008820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.922 [2024-07-25 09:09:09.008873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.922 [2024-07-25 09:09:09.016056] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.922 [2024-07-25 09:09:09.016428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.922 [2024-07-25 09:09:09.016467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.922 [2024-07-25 09:09:09.023518] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.922 [2024-07-25 09:09:09.023944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.922 [2024-07-25 09:09:09.023997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.922 [2024-07-25 09:09:09.030849] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:01.922 [2024-07-25 09:09:09.031246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.922 [2024-07-25 09:09:09.031287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.184 [2024-07-25 09:09:09.038199] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.184 [2024-07-25 09:09:09.038594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.184 [2024-07-25 09:09:09.038641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.184 [2024-07-25 09:09:09.045644] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.184 [2024-07-25 09:09:09.046022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.184 [2024-07-25 09:09:09.046078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.184 [2024-07-25 09:09:09.053026] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.184 [2024-07-25 09:09:09.053399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.184 [2024-07-25 09:09:09.053440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.184 [2024-07-25 09:09:09.060457] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.184 [2024-07-25 09:09:09.060820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.184 [2024-07-25 09:09:09.060883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.184 [2024-07-25 09:09:09.067811] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.184 [2024-07-25 09:09:09.068207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.184 [2024-07-25 09:09:09.068270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.184 [2024-07-25 09:09:09.074915] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.184 [2024-07-25 09:09:09.075289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.184 [2024-07-25 09:09:09.075330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.184 [2024-07-25 09:09:09.082268] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.184 [2024-07-25 09:09:09.082614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.184 [2024-07-25 09:09:09.082670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.184 [2024-07-25 09:09:09.089900] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.184 [2024-07-25 09:09:09.090272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.184 [2024-07-25 09:09:09.090312] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.184 [2024-07-25 09:09:09.097240] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.184 [2024-07-25 09:09:09.097596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.184 [2024-07-25 09:09:09.097637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.184 [2024-07-25 09:09:09.104355] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.184 [2024-07-25 09:09:09.104702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.184 [2024-07-25 09:09:09.104755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.184 [2024-07-25 09:09:09.111242] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.184 [2024-07-25 09:09:09.111609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.184 [2024-07-25 09:09:09.111651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.184 [2024-07-25 09:09:09.118239] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.184 [2024-07-25 09:09:09.118598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.184 [2024-07-25 09:09:09.118639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.184 [2024-07-25 09:09:09.125736] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.184 [2024-07-25 09:09:09.126130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.184 [2024-07-25 09:09:09.126182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.184 [2024-07-25 09:09:09.133331] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.184 [2024-07-25 09:09:09.133686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.184 [2024-07-25 09:09:09.133727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.184 [2024-07-25 09:09:09.140826] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.184 [2024-07-25 09:09:09.141189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:02.184 [2024-07-25 09:09:09.141243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.185 [2024-07-25 09:09:09.148450] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.185 [2024-07-25 09:09:09.148808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.185 [2024-07-25 09:09:09.148875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.185 [2024-07-25 09:09:09.155959] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.185 [2024-07-25 09:09:09.156337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.185 [2024-07-25 09:09:09.156378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.185 [2024-07-25 09:09:09.163480] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.185 [2024-07-25 09:09:09.163904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.185 [2024-07-25 09:09:09.163954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.185 [2024-07-25 09:09:09.171008] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.185 [2024-07-25 09:09:09.171364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.185 [2024-07-25 09:09:09.171413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.185 [2024-07-25 09:09:09.178371] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.185 [2024-07-25 09:09:09.178728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.185 [2024-07-25 09:09:09.178770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.185 [2024-07-25 09:09:09.185744] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.185 [2024-07-25 09:09:09.186117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.185 [2024-07-25 09:09:09.186170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.185 [2024-07-25 09:09:09.193257] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.185 [2024-07-25 09:09:09.193611] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.185 [2024-07-25 09:09:09.193652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.185 [2024-07-25 09:09:09.200817] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.185 [2024-07-25 09:09:09.201179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.185 [2024-07-25 09:09:09.201249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.185 [2024-07-25 09:09:09.208003] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.185 [2024-07-25 09:09:09.208361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.185 [2024-07-25 09:09:09.208420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.185 [2024-07-25 09:09:09.215305] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.185 [2024-07-25 09:09:09.215659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.185 [2024-07-25 09:09:09.215711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.185 [2024-07-25 09:09:09.222575] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.185 [2024-07-25 09:09:09.222927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.185 [2024-07-25 09:09:09.222970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.185 [2024-07-25 09:09:09.229728] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.185 [2024-07-25 09:09:09.230128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.185 [2024-07-25 09:09:09.230168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.185 [2024-07-25 09:09:09.237114] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.185 [2024-07-25 09:09:09.237484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.185 [2024-07-25 09:09:09.237525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.185 [2024-07-25 09:09:09.244630] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with 
pdu=0x2000195fef90 00:27:02.185 [2024-07-25 09:09:09.244992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.185 [2024-07-25 09:09:09.245052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.185 [2024-07-25 09:09:09.251955] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.185 [2024-07-25 09:09:09.252315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.185 [2024-07-25 09:09:09.252363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.185 [2024-07-25 09:09:09.259215] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.185 [2024-07-25 09:09:09.259588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.185 [2024-07-25 09:09:09.259629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.185 [2024-07-25 09:09:09.266733] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.185 [2024-07-25 09:09:09.267095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.185 [2024-07-25 09:09:09.267151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.185 [2024-07-25 09:09:09.274022] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.185 [2024-07-25 09:09:09.274379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.185 [2024-07-25 09:09:09.274427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.185 [2024-07-25 09:09:09.281635] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.185 [2024-07-25 09:09:09.282022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.185 [2024-07-25 09:09:09.282076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.185 [2024-07-25 09:09:09.289422] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.185 [2024-07-25 09:09:09.289796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.185 [2024-07-25 09:09:09.289871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.454 [2024-07-25 09:09:09.297437] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.454 [2024-07-25 09:09:09.297804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.454 [2024-07-25 09:09:09.297864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.454 [2024-07-25 09:09:09.304831] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.454 [2024-07-25 09:09:09.305187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.454 [2024-07-25 09:09:09.305250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.454 [2024-07-25 09:09:09.312158] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.454 [2024-07-25 09:09:09.312525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.454 [2024-07-25 09:09:09.312566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.454 [2024-07-25 09:09:09.319326] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.454 [2024-07-25 09:09:09.319681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.454 [2024-07-25 09:09:09.319725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.454 [2024-07-25 09:09:09.326351] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.454 [2024-07-25 09:09:09.326707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.454 [2024-07-25 09:09:09.326761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.454 [2024-07-25 09:09:09.333388] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.454 [2024-07-25 09:09:09.333744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.455 [2024-07-25 09:09:09.333785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.455 [2024-07-25 09:09:09.340542] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.455 [2024-07-25 09:09:09.340909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.455 [2024-07-25 09:09:09.340962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.455 [2024-07-25 09:09:09.347469] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.455 [2024-07-25 09:09:09.347832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.455 [2024-07-25 09:09:09.347889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.455 [2024-07-25 09:09:09.354714] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.455 [2024-07-25 09:09:09.355087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.455 [2024-07-25 09:09:09.355128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.455 [2024-07-25 09:09:09.361789] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.455 [2024-07-25 09:09:09.362150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.455 [2024-07-25 09:09:09.362200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.455 [2024-07-25 09:09:09.369085] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.455 [2024-07-25 09:09:09.369451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.455 [2024-07-25 09:09:09.369505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.455 [2024-07-25 09:09:09.376262] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.455 [2024-07-25 09:09:09.376637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.455 [2024-07-25 09:09:09.376679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.455 [2024-07-25 09:09:09.383552] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.455 [2024-07-25 09:09:09.383923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.455 [2024-07-25 09:09:09.383973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.455 [2024-07-25 09:09:09.390940] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.455 [2024-07-25 09:09:09.391300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.455 [2024-07-25 09:09:09.391341] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.455 [2024-07-25 09:09:09.398371] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.455 [2024-07-25 09:09:09.398736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.455 [2024-07-25 09:09:09.398777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.455 [2024-07-25 09:09:09.405946] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.455 [2024-07-25 09:09:09.406328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.455 [2024-07-25 09:09:09.406382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.455 [2024-07-25 09:09:09.413473] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.455 [2024-07-25 09:09:09.413925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.455 [2024-07-25 09:09:09.413966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.455 [2024-07-25 09:09:09.421299] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.455 [2024-07-25 09:09:09.421677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.455 [2024-07-25 09:09:09.421737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:02.455 [2024-07-25 09:09:09.428824] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.455 [2024-07-25 09:09:09.429176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.455 [2024-07-25 09:09:09.429241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:02.455 [2024-07-25 09:09:09.436276] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.455 [2024-07-25 09:09:09.436664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.455 [2024-07-25 09:09:09.436706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:02.455 [2024-07-25 09:09:09.443601] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:02.455 [2024-07-25 09:09:09.444001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0
00:27:02.455 [2024-07-25 09:09:09.444052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[log condensed for readability: from 09:09:09.451 through 09:09:10.332 (console time 00:27:02.455 to 00:27:03.241) the bperf writer kept tripping the injected data-digest failures, and the same three-entry pattern repeated every 6-8 ms:
  tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90
  nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:<varies per command> len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
  nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:<0001|0021|0041|0061> p:0 m:0 dnr:0
Only the run summary that follows is kept.]
00:27:03.241
00:27:03.241 Latency(us)
00:27:03.241 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:03.241 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:27:03.241 nvme0n1 : 2.00 4123.39 515.42 0.00 0.00 3871.22 2993.80 11141.12
00:27:03.241 ===================================================================================================================
00:27:03.241 Total : 4123.39 515.42 0.00 0.00 3871.22 2993.80 11141.12
00:27:03.241 0
00:27:03.500 09:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:03.500 09:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:03.500 09:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:03.500 09:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:03.500 | .driver_specific
00:27:03.500 | .nvme_error
00:27:03.500 | .status_code
00:27:03.500 | .command_transient_transport_error'
00:27:03.500 09:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 266 > 0 ))
00:27:03.500 09:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 86431
00:27:03.500 09:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 86431 ']'
00:27:03.500 09:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 86431
00:27:03.500 09:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:27:03.500 09:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:27:03.500 09:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86431
00:27:03.758 09:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:27:03.758 09:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:27:03.758 killing process with pid 86431
00:27:03.758 09:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86431'
00:27:03.758 09:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 86431
00:27:03.759 Received shutdown signal, test time was about 2.000000 seconds
00:27:03.759
00:27:03.759 Latency(us)
00:27:03.759 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:03.759 ===================================================================================================================
00:27:03.759 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:03.759 09:09:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 86431
00:27:04.695 09:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 86188
00:27:04.695 09:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 86188 ']'
00:27:04.954 09:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 86188
00:27:04.954 09:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:27:04.954 09:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:27:04.954 09:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error --
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86188 00:27:04.954 09:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:04.954 09:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:04.954 killing process with pid 86188 00:27:04.954 09:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86188' 00:27:04.954 09:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 86188 00:27:04.954 09:09:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 86188 00:27:06.332 ************************************ 00:27:06.332 END TEST nvmf_digest_error 00:27:06.332 ************************************ 00:27:06.332 00:27:06.332 real 0m23.733s 00:27:06.333 user 0m44.337s 00:27:06.333 sys 0m5.402s 00:27:06.333 09:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:06.333 09:09:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:06.333 09:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:06.333 09:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:27:06.333 09:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:06.333 09:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:27:06.333 09:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:06.333 09:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:27:06.333 09:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:06.333 09:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:06.333 rmmod nvme_tcp 00:27:06.333 rmmod nvme_fabrics 00:27:06.333 rmmod nvme_keyring 00:27:06.333 09:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:06.333 09:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:27:06.333 09:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:27:06.333 09:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 86188 ']' 00:27:06.333 09:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 86188 00:27:06.333 09:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 86188 ']' 00:27:06.333 09:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 86188 00:27:06.333 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (86188) - No such process 00:27:06.333 09:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 86188 is not found' 00:27:06.333 Process with pid 86188 is not found 00:27:06.333 09:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:06.333 09:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:06.333 09:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:06.333 09:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:06.333 09:09:13 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:06.333 09:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:06.333 09:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:06.333 09:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:06.333 09:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:06.333 00:27:06.333 real 0m49.164s 00:27:06.333 user 1m31.312s 00:27:06.333 sys 0m10.685s 00:27:06.333 09:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:06.333 ************************************ 00:27:06.333 END TEST nvmf_digest 00:27:06.333 ************************************ 00:27:06.333 09:09:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:06.333 09:09:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:27:06.333 09:09:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:27:06.333 09:09:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:27:06.333 09:09:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:06.333 09:09:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:06.333 09:09:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.333 ************************************ 00:27:06.333 START TEST nvmf_host_multipath 00:27:06.333 ************************************ 00:27:06.333 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:27:06.333 * Looking for test storage... 
00:27:06.593 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:06.593 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:06.593 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:27:06.593 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 
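For readers following the trace, the multipath flow exercised in the remainder of this log can be reduced to a short sequence of RPC calls. The sketch below is assembled only from commands that appear verbatim later in this run (target bring-up, the two 10.0.0.2 listeners, the bdevperf multipath attach, the ANA flips, and the bpftrace path check); it is a condensed illustration, not the full multipath.sh test. Paths are abbreviated, the "$nvmf_tgt_pid" variable is a stand-in for the target pid captured by the script (86708 in this run), and the redirection of the bpftrace output into trace.txt is assumed from the later "cat trace.txt" step rather than shown explicitly in the log.

    # Target side: TCP transport, one Malloc namespace, listeners on ports 4420 and 4421
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

    # Host side: bdevperf attaches the same subsystem through both listeners,
    # the second attach with -x multipath so both paths back one Nvme0n1 bdev
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

    # Flip ANA states on the listeners, trace which port carries I/O with the
    # nvmf_path.bt probes, then read back the listener whose state matches
    rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
    rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
    scripts/bpftrace.sh "$nvmf_tgt_pid" scripts/bpf/nvmf_path.bt > trace.txt &   # pid placeholder; 86708 in this run
    rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
        | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid'

The confirm_io_on_port checks that follow in the log are this last pair repeated for each ANA combination: the @path[10.0.0.2, <port>] counters emitted into trace.txt show where the I/O actually landed, and the jq query over nvmf_subsystem_get_listeners confirms that the active port agrees with the requested ANA state.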
00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:06.594 09:09:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:06.594 Cannot find device "nvmf_tgt_br" 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:06.594 Cannot find device "nvmf_tgt_br2" 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:06.594 Cannot find device "nvmf_tgt_br" 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:06.594 Cannot find device "nvmf_tgt_br2" 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:06.594 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:06.595 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:06.595 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:06.595 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:27:06.595 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:06.595 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:06.595 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:27:06.595 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:06.595 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:06.595 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:06.595 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:06.595 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:06.595 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:06.595 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:06.595 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:06.595 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:06.853 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:06.853 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:06.853 09:09:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:06.853 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:06.853 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:06.853 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:06.853 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:06.853 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:06.853 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:06.853 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:06.853 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:06.853 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:06.853 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:06.853 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:06.853 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:06.853 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:06.853 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:27:06.853 00:27:06.853 --- 10.0.0.2 ping statistics --- 00:27:06.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.853 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:27:06.853 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:06.853 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:06.853 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:27:06.853 00:27:06.853 --- 10.0.0.3 ping statistics --- 00:27:06.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.853 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:27:06.853 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:06.853 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:06.853 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:27:06.853 00:27:06.853 --- 10.0.0.1 ping statistics --- 00:27:06.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.853 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:27:06.853 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:06.853 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:27:06.853 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:06.853 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:06.853 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:06.853 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:06.853 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:06.853 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:06.853 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:06.853 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:27:06.854 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:06.854 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:06.854 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:06.854 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=86708 00:27:06.854 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 86708 00:27:06.854 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 86708 ']' 00:27:06.854 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:06.854 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:06.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:06.854 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:06.854 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:06.854 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:06.854 09:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:06.854 [2024-07-25 09:09:13.938017] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:27:06.854 [2024-07-25 09:09:13.938186] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:07.112 [2024-07-25 09:09:14.104684] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:07.371 [2024-07-25 09:09:14.383245] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:07.371 [2024-07-25 09:09:14.383321] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:07.371 [2024-07-25 09:09:14.383339] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:07.371 [2024-07-25 09:09:14.383354] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:07.371 [2024-07-25 09:09:14.383366] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:07.371 [2024-07-25 09:09:14.383531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:07.371 [2024-07-25 09:09:14.383552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:07.630 [2024-07-25 09:09:14.617383] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:27:07.889 09:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:07.889 09:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:27:07.889 09:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:07.889 09:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:07.889 09:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:07.889 09:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:07.889 09:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=86708 00:27:07.889 09:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:08.147 [2024-07-25 09:09:15.182773] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:08.147 09:09:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:08.714 Malloc0 00:27:08.715 09:09:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:27:08.973 09:09:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:09.232 09:09:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:09.232 [2024-07-25 09:09:16.308324] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:09.232 09:09:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:09.489 [2024-07-25 09:09:16.540592] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:09.489 09:09:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=86764 00:27:09.489 09:09:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:09.489 09:09:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:27:09.489 09:09:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 86764 /var/tmp/bdevperf.sock 00:27:09.489 09:09:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 86764 ']' 00:27:09.489 09:09:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:09.489 09:09:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:09.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:09.489 09:09:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:09.489 09:09:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:09.489 09:09:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:10.866 09:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:10.866 09:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:27:10.866 09:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:10.866 09:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:27:11.124 Nvme0n1 00:27:11.124 09:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:11.382 Nvme0n1 00:27:11.640 09:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:27:11.640 09:09:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:27:12.577 09:09:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:27:12.577 09:09:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:12.835 09:09:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:13.093 09:09:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:27:13.093 09:09:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=86809 00:27:13.093 09:09:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86708 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:13.093 09:09:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:27:19.657 09:09:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:19.657 09:09:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:27:19.657 09:09:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:27:19.657 09:09:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:19.657 Attaching 4 probes... 00:27:19.657 @path[10.0.0.2, 4421]: 11788 00:27:19.657 @path[10.0.0.2, 4421]: 12464 00:27:19.657 @path[10.0.0.2, 4421]: 12488 00:27:19.657 @path[10.0.0.2, 4421]: 12286 00:27:19.657 @path[10.0.0.2, 4421]: 12424 00:27:19.657 09:09:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:19.657 09:09:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:19.657 09:09:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:27:19.657 09:09:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:27:19.657 09:09:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:27:19.657 09:09:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:27:19.657 09:09:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 86809 00:27:19.657 09:09:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:19.657 09:09:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:27:19.657 09:09:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:19.657 09:09:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:19.915 09:09:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:27:19.915 09:09:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86708 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:19.915 09:09:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=86923 00:27:19.915 09:09:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:27:26.555 09:09:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:26.555 09:09:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:27:26.555 09:09:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:27:26.555 09:09:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:26.555 Attaching 4 probes... 00:27:26.555 @path[10.0.0.2, 4420]: 12255 00:27:26.555 @path[10.0.0.2, 4420]: 12895 00:27:26.555 @path[10.0.0.2, 4420]: 12760 00:27:26.555 @path[10.0.0.2, 4420]: 13798 00:27:26.555 @path[10.0.0.2, 4420]: 13648 00:27:26.555 09:09:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:26.555 09:09:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:27:26.555 09:09:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:26.555 09:09:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:27:26.555 09:09:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:27:26.555 09:09:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:27:26.555 09:09:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 86923 00:27:26.555 09:09:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:26.555 09:09:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:27:26.555 09:09:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:26.555 09:09:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:26.813 09:09:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:27:26.813 09:09:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87036 00:27:26.813 09:09:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:27:26.813 09:09:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86708 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:33.372 09:09:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:33.372 09:09:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:27:33.372 09:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:27:33.372 09:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:33.372 Attaching 4 probes... 00:27:33.372 @path[10.0.0.2, 4421]: 10365 00:27:33.372 @path[10.0.0.2, 4421]: 12787 00:27:33.372 @path[10.0.0.2, 4421]: 12769 00:27:33.372 @path[10.0.0.2, 4421]: 12829 00:27:33.372 @path[10.0.0.2, 4421]: 12744 00:27:33.372 09:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:33.372 09:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:33.372 09:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:27:33.372 09:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:27:33.372 09:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:27:33.372 09:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:27:33.372 09:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87036 00:27:33.372 09:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:33.373 09:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:27:33.373 09:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:33.373 09:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:33.631 09:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:27:33.631 09:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87147 00:27:33.631 09:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86708 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:33.631 09:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:27:40.193 09:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:40.193 09:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:27:40.193 09:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:27:40.193 09:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:40.193 Attaching 4 probes... 
00:27:40.193 00:27:40.193 00:27:40.193 00:27:40.193 00:27:40.193 00:27:40.193 09:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:40.193 09:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:40.193 09:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:27:40.193 09:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:27:40.193 09:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:27:40.193 09:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:27:40.193 09:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87147 00:27:40.193 09:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:40.193 09:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:27:40.193 09:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:40.193 09:09:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:40.452 09:09:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:27:40.452 09:09:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87261 00:27:40.452 09:09:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:27:40.452 09:09:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86708 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:47.014 09:09:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:47.014 09:09:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:27:47.014 09:09:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:27:47.014 09:09:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:47.014 Attaching 4 probes... 
00:27:47.014 @path[10.0.0.2, 4421]: 11779 00:27:47.014 @path[10.0.0.2, 4421]: 12238 00:27:47.014 @path[10.0.0.2, 4421]: 12624 00:27:47.014 @path[10.0.0.2, 4421]: 12711 00:27:47.014 @path[10.0.0.2, 4421]: 12697 00:27:47.015 09:09:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:27:47.015 09:09:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:47.015 09:09:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:47.015 09:09:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:27:47.015 09:09:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:27:47.015 09:09:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:27:47.015 09:09:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87261 00:27:47.015 09:09:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:47.015 09:09:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:47.015 09:09:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:27:47.951 09:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:27:47.951 09:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87378 00:27:47.951 09:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:27:47.951 09:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86708 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:54.514 09:10:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:54.514 09:10:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:27:54.514 09:10:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:27:54.514 09:10:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:54.514 Attaching 4 probes... 
00:27:54.514 @path[10.0.0.2, 4420]: 12025 00:27:54.514 @path[10.0.0.2, 4420]: 12853 00:27:54.514 @path[10.0.0.2, 4420]: 12970 00:27:54.514 @path[10.0.0.2, 4420]: 12949 00:27:54.514 @path[10.0.0.2, 4420]: 13069 00:27:54.514 09:10:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:54.514 09:10:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:54.514 09:10:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:27:54.514 09:10:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:27:54.514 09:10:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:27:54.514 09:10:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:27:54.514 09:10:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87378 00:27:54.514 09:10:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:54.514 09:10:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:54.514 [2024-07-25 09:10:01.533784] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:54.514 09:10:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:54.772 09:10:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:28:01.389 09:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:28:01.389 09:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87549 00:28:01.389 09:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86708 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:01.389 09:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:28:07.952 09:10:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:07.952 09:10:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:28:07.952 09:10:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:28:07.952 09:10:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:07.952 Attaching 4 probes... 
00:28:07.952 @path[10.0.0.2, 4421]: 12952 00:28:07.952 @path[10.0.0.2, 4421]: 13634 00:28:07.952 @path[10.0.0.2, 4421]: 13268 00:28:07.952 @path[10.0.0.2, 4421]: 13244 00:28:07.952 @path[10.0.0.2, 4421]: 13407 00:28:07.952 09:10:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:07.952 09:10:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:07.952 09:10:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:28:07.952 09:10:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:28:07.952 09:10:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:28:07.952 09:10:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:28:07.952 09:10:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87549 00:28:07.952 09:10:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:07.952 09:10:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 86764 00:28:07.952 09:10:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 86764 ']' 00:28:07.952 09:10:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 86764 00:28:07.952 09:10:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:28:07.952 09:10:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:07.952 09:10:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86764 00:28:07.952 09:10:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:28:07.952 09:10:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:28:07.952 killing process with pid 86764 00:28:07.952 09:10:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86764' 00:28:07.952 09:10:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 86764 00:28:07.952 09:10:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 86764 00:28:07.952 Connection closed with partial response: 00:28:07.952 00:28:07.952 00:28:08.215 09:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 86764 00:28:08.215 09:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:28:08.215 [2024-07-25 09:09:16.651982] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:28:08.215 [2024-07-25 09:09:16.652179] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86764 ] 00:28:08.216 [2024-07-25 09:09:16.815047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:08.216 [2024-07-25 09:09:17.119259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:08.216 [2024-07-25 09:09:17.367188] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:28:08.216 Running I/O for 90 seconds... 00:28:08.216 [2024-07-25 09:09:26.891765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.216 [2024-07-25 09:09:26.891901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:08.216 [2024-07-25 09:09:26.891993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.216 [2024-07-25 09:09:26.892024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:08.216 [2024-07-25 09:09:26.892057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:15568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.216 [2024-07-25 09:09:26.892080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:08.216 [2024-07-25 09:09:26.892110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:15576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.216 [2024-07-25 09:09:26.892131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:08.216 [2024-07-25 09:09:26.892176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.216 [2024-07-25 09:09:26.892201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:08.216 [2024-07-25 09:09:26.892237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.216 [2024-07-25 09:09:26.892260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:08.216 [2024-07-25 09:09:26.892303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.216 [2024-07-25 09:09:26.892323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:08.216 [2024-07-25 09:09:26.892358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:15608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.216 [2024-07-25 09:09:26.892380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 
cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:08.216 [2024-07-25 09:09:26.892414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.216 [2024-07-25 09:09:26.892436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:08.216 [2024-07-25 09:09:26.892467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.216 [2024-07-25 09:09:26.892488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:08.216 [2024-07-25 09:09:26.892523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:15632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.216 [2024-07-25 09:09:26.892564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:08.216 [2024-07-25 09:09:26.892597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:15640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.216 [2024-07-25 09:09:26.892617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:08.216 [2024-07-25 09:09:26.892647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.216 [2024-07-25 09:09:26.892667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:08.216 [2024-07-25 09:09:26.892707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:15656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.216 [2024-07-25 09:09:26.892730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:08.216 [2024-07-25 09:09:26.892764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.216 [2024-07-25 09:09:26.892787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:08.216 [2024-07-25 09:09:26.892841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.216 [2024-07-25 09:09:26.892865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:08.216 [2024-07-25 09:09:26.892895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:15680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.216 [2024-07-25 09:09:26.892916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:08.216 [2024-07-25 09:09:26.892946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.216 [2024-07-25 09:09:26.892967] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:08.216 [2024-07-25 09:09:26.892996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.216 [2024-07-25 09:09:26.893024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:08.216 [2024-07-25 09:09:26.893061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.216 [2024-07-25 09:09:26.893084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.216 [2024-07-25 09:09:26.893132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.216 [2024-07-25 09:09:26.893154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:08.216 [2024-07-25 09:09:26.893199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:16176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.216 [2024-07-25 09:09:26.893220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:08.216 [2024-07-25 09:09:26.893264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.216 [2024-07-25 09:09:26.893310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:08.216 [2024-07-25 09:09:26.893342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.216 [2024-07-25 09:09:26.893380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:08.216 [2024-07-25 09:09:26.893437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.216 [2024-07-25 09:09:26.893464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:08.216 [2024-07-25 09:09:26.893497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:16208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.216 [2024-07-25 09:09:26.893518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:08.216 [2024-07-25 09:09:26.893549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.216 [2024-07-25 09:09:26.893570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:08.216 [2024-07-25 09:09:26.893601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:16224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:08.216 [2024-07-25 09:09:26.893622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:08.216 [2024-07-25 09:09:26.893652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.216 [2024-07-25 09:09:26.893672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:08.216 [2024-07-25 09:09:26.893701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.216 [2024-07-25 09:09:26.893721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:08.216 [2024-07-25 09:09:26.893752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.216 [2024-07-25 09:09:26.893772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:08.216 [2024-07-25 09:09:26.893802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.216 [2024-07-25 09:09:26.893823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:08.216 [2024-07-25 09:09:26.893853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.216 [2024-07-25 09:09:26.893886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:08.216 [2024-07-25 09:09:26.893921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.217 [2024-07-25 09:09:26.893952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:08.217 [2024-07-25 09:09:26.893982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.217 [2024-07-25 09:09:26.894013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:08.217 [2024-07-25 09:09:26.894046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.217 [2024-07-25 09:09:26.894067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:08.217 [2024-07-25 09:09:26.894108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.217 [2024-07-25 09:09:26.894129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:08.217 [2024-07-25 09:09:26.894158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 
lba:15736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.217 [2024-07-25 09:09:26.894178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:08.217 [2024-07-25 09:09:26.894208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:15744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.217 [2024-07-25 09:09:26.894229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:08.217 [2024-07-25 09:09:26.894259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:16256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.217 [2024-07-25 09:09:26.894280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:08.217 [2024-07-25 09:09:26.894315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:16264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.217 [2024-07-25 09:09:26.894337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:08.217 [2024-07-25 09:09:26.894368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:16272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.217 [2024-07-25 09:09:26.894396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:08.217 [2024-07-25 09:09:26.894426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.217 [2024-07-25 09:09:26.894447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:08.217 [2024-07-25 09:09:26.894476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.217 [2024-07-25 09:09:26.894497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:08.217 [2024-07-25 09:09:26.894527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:16296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.217 [2024-07-25 09:09:26.894547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:08.217 [2024-07-25 09:09:26.894586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.217 [2024-07-25 09:09:26.894629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:08.217 [2024-07-25 09:09:26.894662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:16312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.217 [2024-07-25 09:09:26.894684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:08.217 [2024-07-25 09:09:26.894724] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.217 [2024-07-25 09:09:26.894746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:08.217 [2024-07-25 09:09:26.894776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.217 [2024-07-25 09:09:26.894797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:08.217 [2024-07-25 09:09:26.894844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.217 [2024-07-25 09:09:26.894868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:08.217 [2024-07-25 09:09:26.894899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.217 [2024-07-25 09:09:26.894919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:08.217 [2024-07-25 09:09:26.894949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.217 [2024-07-25 09:09:26.894971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.217 [2024-07-25 09:09:26.895002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:15784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.217 [2024-07-25 09:09:26.895023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:08.217 [2024-07-25 09:09:26.895053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.217 [2024-07-25 09:09:26.895073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:08.217 [2024-07-25 09:09:26.895109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.217 [2024-07-25 09:09:26.895129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:08.217 [2024-07-25 09:09:26.895160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:15808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.217 [2024-07-25 09:09:26.895181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:08.217 [2024-07-25 09:09:26.895210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.217 [2024-07-25 09:09:26.895230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 
00:28:08.217 [2024-07-25 09:09:26.895261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.217 [2024-07-25 09:09:26.895281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:08.217 [2024-07-25 09:09:26.895312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.217 [2024-07-25 09:09:26.895333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:08.217 [2024-07-25 09:09:26.895372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:15840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.217 [2024-07-25 09:09:26.895394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:08.217 [2024-07-25 09:09:26.895434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.217 [2024-07-25 09:09:26.895458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:08.217 [2024-07-25 09:09:26.895488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.217 [2024-07-25 09:09:26.895508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:08.217 [2024-07-25 09:09:26.895539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:15864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.217 [2024-07-25 09:09:26.895560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:08.217 [2024-07-25 09:09:26.895589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.217 [2024-07-25 09:09:26.895611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:08.217 [2024-07-25 09:09:26.896533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.217 [2024-07-25 09:09:26.896571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:08.217 [2024-07-25 09:09:26.896613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:16336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.217 [2024-07-25 09:09:26.896637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:08.217 [2024-07-25 09:09:26.896669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.217 [2024-07-25 09:09:26.896691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:08.217 [2024-07-25 09:09:26.896722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.217 [2024-07-25 09:09:26.896743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:08.217 [2024-07-25 09:09:26.896773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.218 [2024-07-25 09:09:26.896794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:08.218 [2024-07-25 09:09:26.896824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:16368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.218 [2024-07-25 09:09:26.896857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:08.218 [2024-07-25 09:09:26.896892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.218 [2024-07-25 09:09:26.896914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:08.218 [2024-07-25 09:09:26.896944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.218 [2024-07-25 09:09:26.896983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:08.218 [2024-07-25 09:09:26.897016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.218 [2024-07-25 09:09:26.897037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:08.218 [2024-07-25 09:09:26.897067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.218 [2024-07-25 09:09:26.897087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:08.218 [2024-07-25 09:09:26.897126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.218 [2024-07-25 09:09:26.897146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:08.218 [2024-07-25 09:09:26.897176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.218 [2024-07-25 09:09:26.897196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:08.218 [2024-07-25 09:09:26.897226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.218 [2024-07-25 09:09:26.897247] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:08.218 [2024-07-25 09:09:26.897277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.218 [2024-07-25 09:09:26.897298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:08.218 [2024-07-25 09:09:26.897330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.218 [2024-07-25 09:09:26.897350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:08.218 [2024-07-25 09:09:26.897379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.218 [2024-07-25 09:09:26.897400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:08.218 [2024-07-25 09:09:26.897430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.218 [2024-07-25 09:09:26.897452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:08.218 [2024-07-25 09:09:26.897482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.218 [2024-07-25 09:09:26.897502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:08.218 [2024-07-25 09:09:26.897533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.218 [2024-07-25 09:09:26.897554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:08.218 [2024-07-25 09:09:26.897594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.218 [2024-07-25 09:09:26.897622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.218 [2024-07-25 09:09:26.897654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:15888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.218 [2024-07-25 09:09:26.897675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:08.218 [2024-07-25 09:09:26.897706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.218 [2024-07-25 09:09:26.897726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:08.218 [2024-07-25 09:09:26.897756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:08.218 [2024-07-25 09:09:26.897777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:08.218 [2024-07-25 09:09:26.897807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.218 [2024-07-25 09:09:26.897840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:08.218 [2024-07-25 09:09:26.897872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.218 [2024-07-25 09:09:26.897894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:08.218 [2024-07-25 09:09:26.897925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.218 [2024-07-25 09:09:26.897946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:08.218 [2024-07-25 09:09:26.897975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:15936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.218 [2024-07-25 09:09:26.898003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:08.218 [2024-07-25 09:09:26.898034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.218 [2024-07-25 09:09:26.898055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:08.218 [2024-07-25 09:09:26.898084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.218 [2024-07-25 09:09:26.898105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:08.218 [2024-07-25 09:09:26.898135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.218 [2024-07-25 09:09:26.898156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:08.218 [2024-07-25 09:09:26.898185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.218 [2024-07-25 09:09:26.898206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:08.218 [2024-07-25 09:09:26.898236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.218 [2024-07-25 09:09:26.898257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:08.218 [2024-07-25 09:09:26.898295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 
lba:15984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.218 [2024-07-25 09:09:26.898326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:08.218 [2024-07-25 09:09:26.898357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.218 [2024-07-25 09:09:26.898378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:08.218 [2024-07-25 09:09:26.898408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:16000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.218 [2024-07-25 09:09:26.898430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:08.218 [2024-07-25 09:09:26.898465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:16480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.218 [2024-07-25 09:09:26.898486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:08.218 [2024-07-25 09:09:26.898516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.218 [2024-07-25 09:09:26.898537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:08.218 [2024-07-25 09:09:26.898566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.218 [2024-07-25 09:09:26.898587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:08.218 [2024-07-25 09:09:26.898617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.218 [2024-07-25 09:09:26.898638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:08.218 [2024-07-25 09:09:26.898668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.219 [2024-07-25 09:09:26.898689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:08.219 [2024-07-25 09:09:26.898725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.219 [2024-07-25 09:09:26.898748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:08.219 [2024-07-25 09:09:26.898777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.219 [2024-07-25 09:09:26.898798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:08.219 [2024-07-25 09:09:26.898844] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.219 [2024-07-25 09:09:26.898868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:08.219 [2024-07-25 09:09:26.898898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.219 [2024-07-25 09:09:26.898919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:08.219 [2024-07-25 09:09:26.898957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.219 [2024-07-25 09:09:26.898979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:08.219 [2024-07-25 09:09:26.899009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:16560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.219 [2024-07-25 09:09:26.899047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:08.219 [2024-07-25 09:09:26.899095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.219 [2024-07-25 09:09:26.899118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:08.219 [2024-07-25 09:09:26.899147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.219 [2024-07-25 09:09:26.899168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.219 [2024-07-25 09:09:26.899198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.219 [2024-07-25 09:09:26.899225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:08.219 [2024-07-25 09:09:26.899255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:16016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.219 [2024-07-25 09:09:26.899276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:08.219 [2024-07-25 09:09:26.899305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:16024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.219 [2024-07-25 09:09:26.899326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.219 [2024-07-25 09:09:26.899357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.219 [2024-07-25 09:09:26.899377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:28:08.219 [2024-07-25 09:09:26.899407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.219 [2024-07-25 09:09:26.899428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:08.219 [2024-07-25 09:09:26.899462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.219 [2024-07-25 09:09:26.899483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:08.219 [2024-07-25 09:09:26.899513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.219 [2024-07-25 09:09:26.899534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:08.219 [2024-07-25 09:09:26.899563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:16064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.219 [2024-07-25 09:09:26.899584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:08.219 [2024-07-25 09:09:26.899613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.219 [2024-07-25 09:09:26.899643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:08.219 [2024-07-25 09:09:26.899674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.219 [2024-07-25 09:09:26.899695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:08.219 [2024-07-25 09:09:26.899725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.219 [2024-07-25 09:09:26.899745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:08.219 [2024-07-25 09:09:26.899775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:16096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.219 [2024-07-25 09:09:26.899796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:08.219 [2024-07-25 09:09:26.899840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.219 [2024-07-25 09:09:26.899888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:08.219 [2024-07-25 09:09:26.899922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:16112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.219 [2024-07-25 09:09:26.899943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:08.219 [2024-07-25 09:09:26.899978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.219 [2024-07-25 09:09:26.900000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:08.219 [2024-07-25 09:09:26.900030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:16128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.219 [2024-07-25 09:09:26.900051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:08.219 [2024-07-25 09:09:33.472694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:100000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.219 [2024-07-25 09:09:33.472780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:08.219 [2024-07-25 09:09:33.472950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:100008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.219 [2024-07-25 09:09:33.472986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:08.219 [2024-07-25 09:09:33.473032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:100016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.219 [2024-07-25 09:09:33.473055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:08.219 [2024-07-25 09:09:33.473089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:100024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.219 [2024-07-25 09:09:33.473110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:08.219 [2024-07-25 09:09:33.473140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:100032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.219 [2024-07-25 09:09:33.473196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:08.219 [2024-07-25 09:09:33.473228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:100040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.219 [2024-07-25 09:09:33.473258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:08.219 [2024-07-25 09:09:33.473289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:100048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.219 [2024-07-25 09:09:33.473311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:08.219 [2024-07-25 09:09:33.473341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.219 [2024-07-25 09:09:33.473361] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:08.219 [2024-07-25 09:09:33.473391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:100064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.219 [2024-07-25 09:09:33.473411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:08.219 [2024-07-25 09:09:33.473446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:100072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.219 [2024-07-25 09:09:33.473465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:08.219 [2024-07-25 09:09:33.473496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:100080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.219 [2024-07-25 09:09:33.473516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:08.220 [2024-07-25 09:09:33.473545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:100088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.220 [2024-07-25 09:09:33.473565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:08.220 [2024-07-25 09:09:33.473595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:100096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.220 [2024-07-25 09:09:33.473615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:08.220 [2024-07-25 09:09:33.473645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:100104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.220 [2024-07-25 09:09:33.473666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:08.220 [2024-07-25 09:09:33.473695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.220 [2024-07-25 09:09:33.473715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:08.220 [2024-07-25 09:09:33.473760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.220 [2024-07-25 09:09:33.473779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:08.220 [2024-07-25 09:09:33.473810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:99552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.220 [2024-07-25 09:09:33.473829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:08.220 [2024-07-25 09:09:33.473904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:99560 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:08.220 [2024-07-25 09:09:33.473928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:08.220 [2024-07-25 09:09:33.473959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:99568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.220 [2024-07-25 09:09:33.473979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:08.220 [2024-07-25 09:09:33.474009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:99576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.220 [2024-07-25 09:09:33.474029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:08.220 [2024-07-25 09:09:33.474059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:99584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.220 [2024-07-25 09:09:33.474090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:08.220 [2024-07-25 09:09:33.474120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:99592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.220 [2024-07-25 09:09:33.474140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:08.220 [2024-07-25 09:09:33.474169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:99600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.220 [2024-07-25 09:09:33.474189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:08.220 [2024-07-25 09:09:33.474219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:99608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.220 [2024-07-25 09:09:33.474239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.220 [2024-07-25 09:09:33.474269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:99616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.220 [2024-07-25 09:09:33.474290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:08.220 [2024-07-25 09:09:33.474319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:99624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.220 [2024-07-25 09:09:33.474339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:08.220 [2024-07-25 09:09:33.474369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:99632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.220 [2024-07-25 09:09:33.474389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.220 [2024-07-25 09:09:33.474419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:13 nsid:1 lba:99640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.220 [2024-07-25 09:09:33.474439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.220 [2024-07-25 09:09:33.474469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:99648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.220 [2024-07-25 09:09:33.474489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:08.220 [2024-07-25 09:09:33.474531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.220 [2024-07-25 09:09:33.474553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:08.220 [2024-07-25 09:09:33.474583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.220 [2024-07-25 09:09:33.474603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:08.220 [2024-07-25 09:09:33.474634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:99672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.220 [2024-07-25 09:09:33.474655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:08.220 [2024-07-25 09:09:33.474692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.220 [2024-07-25 09:09:33.474729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:08.220 [2024-07-25 09:09:33.474771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:100136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.220 [2024-07-25 09:09:33.474792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:08.220 [2024-07-25 09:09:33.474822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.220 [2024-07-25 09:09:33.474853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:08.220 [2024-07-25 09:09:33.474886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.220 [2024-07-25 09:09:33.474906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:08.220 [2024-07-25 09:09:33.474935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.220 [2024-07-25 09:09:33.474955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:08.220 [2024-07-25 09:09:33.474984] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:100168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.220 [2024-07-25 09:09:33.475004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:08.220 [2024-07-25 09:09:33.475032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:100176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.220 [2024-07-25 09:09:33.475052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:08.221 [2024-07-25 09:09:33.475080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:100184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.221 [2024-07-25 09:09:33.475100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:08.221 [2024-07-25 09:09:33.475130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:100192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.221 [2024-07-25 09:09:33.475151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:08.221 [2024-07-25 09:09:33.475196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:100200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.221 [2024-07-25 09:09:33.475226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:08.221 [2024-07-25 09:09:33.475258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:100208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.221 [2024-07-25 09:09:33.475279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:08.221 [2024-07-25 09:09:33.475309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:100216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.221 [2024-07-25 09:09:33.475329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:08.221 [2024-07-25 09:09:33.475359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.221 [2024-07-25 09:09:33.475380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:08.221 [2024-07-25 09:09:33.475410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:100232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.221 [2024-07-25 09:09:33.475448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:08.221 [2024-07-25 09:09:33.475479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:100240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.221 [2024-07-25 09:09:33.475501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 
sqhd:0014 p:0 m:0 dnr:0 00:28:08.221 [2024-07-25 09:09:33.475532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:100248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.221 [2024-07-25 09:09:33.475552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:08.221 [2024-07-25 09:09:33.475582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:99680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.221 [2024-07-25 09:09:33.475618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:08.221 [2024-07-25 09:09:33.475664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.221 [2024-07-25 09:09:33.475700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:08.221 [2024-07-25 09:09:33.475728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:99696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.221 [2024-07-25 09:09:33.475748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:08.221 [2024-07-25 09:09:33.475777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:99704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.221 [2024-07-25 09:09:33.475797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:08.221 [2024-07-25 09:09:33.475843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:99712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.221 [2024-07-25 09:09:33.475887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:08.221 [2024-07-25 09:09:33.475922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:99720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.221 [2024-07-25 09:09:33.475961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:08.221 [2024-07-25 09:09:33.475995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:99728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.221 [2024-07-25 09:09:33.476016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:08.221 [2024-07-25 09:09:33.476046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:99736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.221 [2024-07-25 09:09:33.476066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:08.221 [2024-07-25 09:09:33.476096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:100256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.221 [2024-07-25 09:09:33.476117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:08.221 [2024-07-25 09:09:33.476147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:100264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.221 [2024-07-25 09:09:33.476167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:08.221 [2024-07-25 09:09:33.476197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:100272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.221 [2024-07-25 09:09:33.476217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:08.221 [2024-07-25 09:09:33.476247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:100280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.221 [2024-07-25 09:09:33.476269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.221 [2024-07-25 09:09:33.476299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:100288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.221 [2024-07-25 09:09:33.476325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:08.221 [2024-07-25 09:09:33.476354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:100296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.221 [2024-07-25 09:09:33.476374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:08.221 [2024-07-25 09:09:33.476404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:100304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.221 [2024-07-25 09:09:33.476424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:08.221 [2024-07-25 09:09:33.476453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.221 [2024-07-25 09:09:33.476473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:08.221 [2024-07-25 09:09:33.476503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:99744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.221 [2024-07-25 09:09:33.476523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:08.221 [2024-07-25 09:09:33.476554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:99752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.221 [2024-07-25 09:09:33.476591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:08.221 [2024-07-25 09:09:33.476633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:99760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.221 [2024-07-25 
09:09:33.476654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:08.221 [2024-07-25 09:09:33.476684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:99768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.221 [2024-07-25 09:09:33.476704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:08.221 [2024-07-25 09:09:33.476734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:99776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.221 [2024-07-25 09:09:33.476755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:08.221 [2024-07-25 09:09:33.476784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:99784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.221 [2024-07-25 09:09:33.476804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:08.221 [2024-07-25 09:09:33.476851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:99792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.221 [2024-07-25 09:09:33.476873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:08.221 [2024-07-25 09:09:33.476903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:99800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.221 [2024-07-25 09:09:33.476923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:08.221 [2024-07-25 09:09:33.476953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:99808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.221 [2024-07-25 09:09:33.476980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:08.221 [2024-07-25 09:09:33.477010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:99816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.221 [2024-07-25 09:09:33.477030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:08.221 [2024-07-25 09:09:33.477059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:99824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.221 [2024-07-25 09:09:33.477086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:08.221 [2024-07-25 09:09:33.477117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:99832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.221 [2024-07-25 09:09:33.477137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:08.222 [2024-07-25 09:09:33.477167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:99840 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.222 [2024-07-25 09:09:33.477187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:08.222 [2024-07-25 09:09:33.477217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:99848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.222 [2024-07-25 09:09:33.477237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:08.222 [2024-07-25 09:09:33.477275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:99856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.222 [2024-07-25 09:09:33.477296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:08.222 [2024-07-25 09:09:33.477328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:99864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.222 [2024-07-25 09:09:33.477349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:08.222 [2024-07-25 09:09:33.477393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:100320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.222 [2024-07-25 09:09:33.477415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:08.222 [2024-07-25 09:09:33.477446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:100328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.222 [2024-07-25 09:09:33.477466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:08.222 [2024-07-25 09:09:33.477503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:100336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.222 [2024-07-25 09:09:33.477524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:08.222 [2024-07-25 09:09:33.477558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:100344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.222 [2024-07-25 09:09:33.477579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:08.222 [2024-07-25 09:09:33.477609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:100352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.222 [2024-07-25 09:09:33.477629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:08.222 [2024-07-25 09:09:33.477657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:100360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.222 [2024-07-25 09:09:33.477678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:08.222 [2024-07-25 09:09:33.477707] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:100368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.222 [2024-07-25 09:09:33.477728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:08.222 [2024-07-25 09:09:33.477757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:100376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.222 [2024-07-25 09:09:33.477778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:08.222 [2024-07-25 09:09:33.477807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.222 [2024-07-25 09:09:33.477840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:08.222 [2024-07-25 09:09:33.477872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:100392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.222 [2024-07-25 09:09:33.477894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:08.222 [2024-07-25 09:09:33.477933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:100400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.222 [2024-07-25 09:09:33.477955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:08.222 [2024-07-25 09:09:33.477984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:100408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.222 [2024-07-25 09:09:33.478005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.222 [2024-07-25 09:09:33.478035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:100416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.222 [2024-07-25 09:09:33.478055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:08.222 [2024-07-25 09:09:33.478098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:100424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.222 [2024-07-25 09:09:33.478118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:08.222 [2024-07-25 09:09:33.478147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:100432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.222 [2024-07-25 09:09:33.478168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:08.222 [2024-07-25 09:09:33.478198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:100440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.222 [2024-07-25 09:09:33.478218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 
00:28:08.222 [2024-07-25 09:09:33.478248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:99872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.222 [2024-07-25 09:09:33.478268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:08.222 [2024-07-25 09:09:33.478298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:99880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.222 [2024-07-25 09:09:33.478318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:08.222 [2024-07-25 09:09:33.478353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:99888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.222 [2024-07-25 09:09:33.478373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:08.222 [2024-07-25 09:09:33.478403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:99896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.222 [2024-07-25 09:09:33.478423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:08.222 [2024-07-25 09:09:33.478453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:99904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.222 [2024-07-25 09:09:33.478473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:08.222 [2024-07-25 09:09:33.478503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.222 [2024-07-25 09:09:33.478523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:08.222 [2024-07-25 09:09:33.478555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:99920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.222 [2024-07-25 09:09:33.478582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:08.222 [2024-07-25 09:09:33.478614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:99928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.222 [2024-07-25 09:09:33.478635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:08.222 [2024-07-25 09:09:33.478664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.222 [2024-07-25 09:09:33.478685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:08.222 [2024-07-25 09:09:33.478715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:99944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.222 [2024-07-25 09:09:33.478735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:08.222 [2024-07-25 09:09:33.478764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:99952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.222 [2024-07-25 09:09:33.478784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:08.222 [2024-07-25 09:09:33.478826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:99960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.222 [2024-07-25 09:09:33.478850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:08.222 [2024-07-25 09:09:33.478881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:99968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.222 [2024-07-25 09:09:33.478902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:08.222 [2024-07-25 09:09:33.478932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:99976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.222 [2024-07-25 09:09:33.478969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:08.222 [2024-07-25 09:09:33.479001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.222 [2024-07-25 09:09:33.479023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:08.222 [2024-07-25 09:09:33.480053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.222 [2024-07-25 09:09:33.480090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:08.222 [2024-07-25 09:09:33.480141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:100448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.223 [2024-07-25 09:09:33.480166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:08.223 [2024-07-25 09:09:33.480208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:100456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.223 [2024-07-25 09:09:33.480231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:08.223 [2024-07-25 09:09:33.480274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:100464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.223 [2024-07-25 09:09:33.480308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:08.223 [2024-07-25 09:09:33.480350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:100472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.223 [2024-07-25 09:09:33.480374] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:08.223 [2024-07-25 09:09:33.480415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:100480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.223 [2024-07-25 09:09:33.480436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:08.223 [2024-07-25 09:09:33.480476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:100488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.223 [2024-07-25 09:09:33.480498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:08.223 [2024-07-25 09:09:33.480547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:100496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.223 [2024-07-25 09:09:33.480568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:08.223 [2024-07-25 09:09:33.480631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:100504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.223 [2024-07-25 09:09:33.480657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:08.223 [2024-07-25 09:09:33.480700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.223 [2024-07-25 09:09:33.480722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:08.223 [2024-07-25 09:09:33.480763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:100520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.223 [2024-07-25 09:09:33.480784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:08.223 [2024-07-25 09:09:33.480838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:100528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.223 [2024-07-25 09:09:33.480862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:08.223 [2024-07-25 09:09:33.480903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:100536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.223 [2024-07-25 09:09:33.480924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.223 [2024-07-25 09:09:33.480964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.223 [2024-07-25 09:09:33.480999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:08.223 [2024-07-25 09:09:33.481040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:100552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:08.223 [2024-07-25 09:09:33.481061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:08.223 [2024-07-25 09:09:33.481100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:100560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.223 [2024-07-25 09:09:33.481122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:08.223 [2024-07-25 09:09:33.481176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:100568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.223 [2024-07-25 09:09:33.481199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:08.223 [2024-07-25 09:09:40.552152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.223 [2024-07-25 09:09:40.552276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:08.223 [2024-07-25 09:09:40.552384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.223 [2024-07-25 09:09:40.552418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:08.223 [2024-07-25 09:09:40.552455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.223 [2024-07-25 09:09:40.552477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:08.223 [2024-07-25 09:09:40.552509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.223 [2024-07-25 09:09:40.552530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:08.223 [2024-07-25 09:09:40.552570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.223 [2024-07-25 09:09:40.552591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:08.223 [2024-07-25 09:09:40.552622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:16080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.223 [2024-07-25 09:09:40.552643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:08.223 [2024-07-25 09:09:40.552673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.223 [2024-07-25 09:09:40.552694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:08.223 [2024-07-25 09:09:40.552724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 
lba:16096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.223 [2024-07-25 09:09:40.552745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:08.223 [2024-07-25 09:09:40.552775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.223 [2024-07-25 09:09:40.552807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:08.223 [2024-07-25 09:09:40.552855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.223 [2024-07-25 09:09:40.552881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:08.223 [2024-07-25 09:09:40.552913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:16120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.223 [2024-07-25 09:09:40.552935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:08.223 [2024-07-25 09:09:40.552986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:16128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.223 [2024-07-25 09:09:40.553009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:08.223 [2024-07-25 09:09:40.553039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.223 [2024-07-25 09:09:40.553060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:08.223 [2024-07-25 09:09:40.553091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.223 [2024-07-25 09:09:40.553112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:08.223 [2024-07-25 09:09:40.553142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.223 [2024-07-25 09:09:40.553162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:08.223 [2024-07-25 09:09:40.553202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.223 [2024-07-25 09:09:40.553223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:08.223 [2024-07-25 09:09:40.553254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:15592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.223 [2024-07-25 09:09:40.553274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:08.223 [2024-07-25 09:09:40.553305] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.223 [2024-07-25 09:09:40.553325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.223 [2024-07-25 09:09:40.553356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:15608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.223 [2024-07-25 09:09:40.553376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:08.223 [2024-07-25 09:09:40.553407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.223 [2024-07-25 09:09:40.553427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:08.223 [2024-07-25 09:09:40.553457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:15624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.223 [2024-07-25 09:09:40.553478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:08.224 [2024-07-25 09:09:40.553508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.224 [2024-07-25 09:09:40.553530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:08.224 [2024-07-25 09:09:40.553569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.224 [2024-07-25 09:09:40.553590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:08.224 [2024-07-25 09:09:40.553620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.224 [2024-07-25 09:09:40.553650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:08.224 [2024-07-25 09:09:40.553683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.224 [2024-07-25 09:09:40.553704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:08.224 [2024-07-25 09:09:40.553734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.224 [2024-07-25 09:09:40.553755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:08.224 [2024-07-25 09:09:40.553786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:15672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.224 [2024-07-25 09:09:40.553807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002a p:0 m:0 dnr:0 
00:28:08.224 [2024-07-25 09:09:40.553854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.224 [2024-07-25 09:09:40.553877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:08.224 [2024-07-25 09:09:40.553911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.224 [2024-07-25 09:09:40.553933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:08.224 [2024-07-25 09:09:40.553964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:15696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.224 [2024-07-25 09:09:40.553985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:08.224 [2024-07-25 09:09:40.554016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.224 [2024-07-25 09:09:40.554036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:08.224 [2024-07-25 09:09:40.554067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.224 [2024-07-25 09:09:40.554088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:08.224 [2024-07-25 09:09:40.554127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:16168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.224 [2024-07-25 09:09:40.554149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:08.224 [2024-07-25 09:09:40.554180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.224 [2024-07-25 09:09:40.554201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:08.224 [2024-07-25 09:09:40.554232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.224 [2024-07-25 09:09:40.554253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:08.224 [2024-07-25 09:09:40.554284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.224 [2024-07-25 09:09:40.554313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:08.224 [2024-07-25 09:09:40.554346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.224 [2024-07-25 09:09:40.554367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:13 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:08.224 [2024-07-25 09:09:40.554398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.224 [2024-07-25 09:09:40.554419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:08.224 [2024-07-25 09:09:40.554449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.224 [2024-07-25 09:09:40.554471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:08.224 [2024-07-25 09:09:40.554503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:16224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.224 [2024-07-25 09:09:40.554524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:08.224 [2024-07-25 09:09:40.554554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:16232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.224 [2024-07-25 09:09:40.554575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:08.224 [2024-07-25 09:09:40.554606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.224 [2024-07-25 09:09:40.554627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:08.224 [2024-07-25 09:09:40.554658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.224 [2024-07-25 09:09:40.554678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:08.224 [2024-07-25 09:09:40.554709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.224 [2024-07-25 09:09:40.554730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:08.224 [2024-07-25 09:09:40.554760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.224 [2024-07-25 09:09:40.554781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:08.224 [2024-07-25 09:09:40.554825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.224 [2024-07-25 09:09:40.554867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:08.224 [2024-07-25 09:09:40.554902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.224 [2024-07-25 09:09:40.554924] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:08.224 [2024-07-25 09:09:40.554955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.224 [2024-07-25 09:09:40.554976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:08.224 [2024-07-25 09:09:40.555017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.224 [2024-07-25 09:09:40.555039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:08.224 [2024-07-25 09:09:40.555069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.224 [2024-07-25 09:09:40.555091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.224 [2024-07-25 09:09:40.555121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.224 [2024-07-25 09:09:40.555142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:08.224 [2024-07-25 09:09:40.555172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.224 [2024-07-25 09:09:40.555193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:08.224 [2024-07-25 09:09:40.555224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.224 [2024-07-25 09:09:40.555244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:08.224 [2024-07-25 09:09:40.555275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.224 [2024-07-25 09:09:40.555295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:08.224 [2024-07-25 09:09:40.555326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.224 [2024-07-25 09:09:40.555347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:08.224 [2024-07-25 09:09:40.555378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.224 [2024-07-25 09:09:40.555399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:08.224 [2024-07-25 09:09:40.555429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:08.224 [2024-07-25 09:09:40.555450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:08.224 [2024-07-25 09:09:40.555481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.225 [2024-07-25 09:09:40.555502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:08.225 [2024-07-25 09:09:40.555534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.225 [2024-07-25 09:09:40.555555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:08.225 [2024-07-25 09:09:40.555586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:16320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.225 [2024-07-25 09:09:40.555607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:08.225 [2024-07-25 09:09:40.555649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.225 [2024-07-25 09:09:40.555671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:08.225 [2024-07-25 09:09:40.555702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.225 [2024-07-25 09:09:40.555724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:08.225 [2024-07-25 09:09:40.555754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.225 [2024-07-25 09:09:40.555775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:08.225 [2024-07-25 09:09:40.555806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.225 [2024-07-25 09:09:40.555842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:08.225 [2024-07-25 09:09:40.555896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.225 [2024-07-25 09:09:40.555920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:08.225 [2024-07-25 09:09:40.555951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.225 [2024-07-25 09:09:40.555972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:08.225 [2024-07-25 09:09:40.556002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 
lba:15800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.225 [2024-07-25 09:09:40.556033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:08.225 [2024-07-25 09:09:40.556063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:15808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.225 [2024-07-25 09:09:40.556083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:08.225 [2024-07-25 09:09:40.556114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.225 [2024-07-25 09:09:40.556135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:08.225 [2024-07-25 09:09:40.556165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:15824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.225 [2024-07-25 09:09:40.556186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:08.225 [2024-07-25 09:09:40.556216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:15832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.225 [2024-07-25 09:09:40.556238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:08.225 [2024-07-25 09:09:40.556269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.225 [2024-07-25 09:09:40.556289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:08.225 [2024-07-25 09:09:40.556328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:15848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.225 [2024-07-25 09:09:40.556350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:08.225 [2024-07-25 09:09:40.556381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.225 [2024-07-25 09:09:40.556402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:08.225 [2024-07-25 09:09:40.556433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.225 [2024-07-25 09:09:40.556454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:08.225 [2024-07-25 09:09:40.556485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.225 [2024-07-25 09:09:40.556507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:08.225 [2024-07-25 09:09:40.556537] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.225 [2024-07-25 09:09:40.556564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:08.225 [2024-07-25 09:09:40.556601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:15888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.225 [2024-07-25 09:09:40.556621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:08.225 [2024-07-25 09:09:40.556651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.225 [2024-07-25 09:09:40.556672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:08.225 [2024-07-25 09:09:40.556703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.225 [2024-07-25 09:09:40.556724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:08.225 [2024-07-25 09:09:40.556760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.225 [2024-07-25 09:09:40.556783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:08.225 [2024-07-25 09:09:40.556827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:16368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.225 [2024-07-25 09:09:40.556851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.225 [2024-07-25 09:09:40.556883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.225 [2024-07-25 09:09:40.556904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:08.225 [2024-07-25 09:09:40.556935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.225 [2024-07-25 09:09:40.556956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:08.225 [2024-07-25 09:09:40.556987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.225 [2024-07-25 09:09:40.557017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:08.225 [2024-07-25 09:09:40.557049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.225 [2024-07-25 09:09:40.557070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 
00:28:08.225 [2024-07-25 09:09:40.557101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.225 [2024-07-25 09:09:40.557122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:08.225 [2024-07-25 09:09:40.557153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.225 [2024-07-25 09:09:40.557179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:08.225 [2024-07-25 09:09:40.557210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.226 [2024-07-25 09:09:40.557231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:08.226 [2024-07-25 09:09:40.557262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.226 [2024-07-25 09:09:40.557283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:08.226 [2024-07-25 09:09:40.557314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.226 [2024-07-25 09:09:40.557335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:08.226 [2024-07-25 09:09:40.557366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:16448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.226 [2024-07-25 09:09:40.557387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:08.226 [2024-07-25 09:09:40.557418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.226 [2024-07-25 09:09:40.557439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:08.226 [2024-07-25 09:09:40.557470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.226 [2024-07-25 09:09:40.557530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:08.226 [2024-07-25 09:09:40.557563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.226 [2024-07-25 09:09:40.557585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:08.226 [2024-07-25 09:09:40.557616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.226 [2024-07-25 09:09:40.557637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:34 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:08.226 [2024-07-25 09:09:40.557668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.226 [2024-07-25 09:09:40.557698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:08.226 [2024-07-25 09:09:40.557731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.226 [2024-07-25 09:09:40.557752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:08.226 [2024-07-25 09:09:40.557782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:15928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.226 [2024-07-25 09:09:40.557803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:08.226 [2024-07-25 09:09:40.557852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:15936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.226 [2024-07-25 09:09:40.557875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:08.226 [2024-07-25 09:09:40.557906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.226 [2024-07-25 09:09:40.557927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:08.226 [2024-07-25 09:09:40.557957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.226 [2024-07-25 09:09:40.557978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:08.226 [2024-07-25 09:09:40.558009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.226 [2024-07-25 09:09:40.558029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:08.226 [2024-07-25 09:09:40.558059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:15968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.226 [2024-07-25 09:09:40.558080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:08.226 [2024-07-25 09:09:40.558111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.226 [2024-07-25 09:09:40.558131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:08.226 [2024-07-25 09:09:40.558162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.226 [2024-07-25 09:09:40.558183] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:08.226 [2024-07-25 09:09:40.558214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:15992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.226 [2024-07-25 09:09:40.558235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:08.226 [2024-07-25 09:09:40.558266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:16000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.226 [2024-07-25 09:09:40.558287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:08.226 [2024-07-25 09:09:40.558318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.226 [2024-07-25 09:09:40.558338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:08.226 [2024-07-25 09:09:40.558379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:16016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.226 [2024-07-25 09:09:40.558416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.226 [2024-07-25 09:09:40.558450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:16024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.226 [2024-07-25 09:09:40.558482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:08.226 [2024-07-25 09:09:40.559461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:16032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.226 [2024-07-25 09:09:40.559497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:08.226 [2024-07-25 09:09:40.559549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.226 [2024-07-25 09:09:40.559574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.226 [2024-07-25 09:09:40.559617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.226 [2024-07-25 09:09:40.559640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.226 [2024-07-25 09:09:40.559681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:16504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.226 [2024-07-25 09:09:40.559703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:08.226 [2024-07-25 09:09:40.559744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:16512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:08.226 [2024-07-25 09:09:40.559766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:08.226 [2024-07-25 09:09:40.559807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.226 [2024-07-25 09:09:40.559845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:08.226 [2024-07-25 09:09:40.559901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.226 [2024-07-25 09:09:40.559926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:08.226 [2024-07-25 09:09:40.559968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.226 [2024-07-25 09:09:40.559990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:08.226 [2024-07-25 09:09:40.560057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.226 [2024-07-25 09:09:40.560093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:08.226 [2024-07-25 09:09:40.560136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.226 [2024-07-25 09:09:40.560159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:08.226 [2024-07-25 09:09:40.560220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.226 [2024-07-25 09:09:40.560244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:08.226 [2024-07-25 09:09:40.560286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.226 [2024-07-25 09:09:40.560308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:08.226 [2024-07-25 09:09:40.560349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.226 [2024-07-25 09:09:40.560371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:08.226 [2024-07-25 09:09:40.560412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.226 [2024-07-25 09:09:40.560434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:08.226 [2024-07-25 09:09:40.560475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 
lba:16592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.227 [2024-07-25 09:09:40.560496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:08.227 [2024-07-25 09:09:40.560539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.227 [2024-07-25 09:09:40.560560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:08.227 [2024-07-25 09:09:40.560606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.227 [2024-07-25 09:09:40.560629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:08.227 [2024-07-25 09:09:53.911648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:69920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.227 [2024-07-25 09:09:53.911754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:08.227 [2024-07-25 09:09:53.911883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:69928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.227 [2024-07-25 09:09:53.911918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:08.227 [2024-07-25 09:09:53.911954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:69936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.227 [2024-07-25 09:09:53.911977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:08.227 [2024-07-25 09:09:53.912008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:69944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.227 [2024-07-25 09:09:53.912029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:08.227 [2024-07-25 09:09:53.912059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:69952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.227 [2024-07-25 09:09:53.912080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:08.227 [2024-07-25 09:09:53.912110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:69960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.227 [2024-07-25 09:09:53.912151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:08.227 [2024-07-25 09:09:53.912183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:69968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.227 [2024-07-25 09:09:53.912204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:08.227 [2024-07-25 09:09:53.912234] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:69976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.227 [2024-07-25 09:09:53.912255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:08.227 [2024-07-25 09:09:53.912293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:69984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.227 [2024-07-25 09:09:53.912314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:08.227 [2024-07-25 09:09:53.912344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:69992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.227 [2024-07-25 09:09:53.912364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:08.227 [2024-07-25 09:09:53.912394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:70000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.227 [2024-07-25 09:09:53.912415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:08.227 [2024-07-25 09:09:53.912443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:70008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.227 [2024-07-25 09:09:53.912464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:08.227 [2024-07-25 09:09:53.912493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:70016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.227 [2024-07-25 09:09:53.912514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:08.227 [2024-07-25 09:09:53.912543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:70024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.227 [2024-07-25 09:09:53.912563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:08.227 [2024-07-25 09:09:53.912593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:70032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.227 [2024-07-25 09:09:53.912614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:08.227 [2024-07-25 09:09:53.912644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:70040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.227 [2024-07-25 09:09:53.912664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:08.227 [2024-07-25 09:09:53.912694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:69472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.227 [2024-07-25 09:09:53.912714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003e p:0 m:0 dnr:0 
00:28:08.227 [2024-07-25 09:09:53.912747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:69480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.227 [2024-07-25 09:09:53.912776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:08.227 [2024-07-25 09:09:53.912809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:69488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.227 [2024-07-25 09:09:53.912850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:08.227 [2024-07-25 09:09:53.912883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:69496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.227 [2024-07-25 09:09:53.912905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.227 [2024-07-25 09:09:53.912935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:69504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.227 [2024-07-25 09:09:53.912956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:08.227 [2024-07-25 09:09:53.912985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:69512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.227 [2024-07-25 09:09:53.913006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:08.227 [2024-07-25 09:09:53.913035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:69520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.227 [2024-07-25 09:09:53.913055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:08.227 [2024-07-25 09:09:53.913085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:69528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.227 [2024-07-25 09:09:53.913106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:08.227 [2024-07-25 09:09:53.913136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:69536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.227 [2024-07-25 09:09:53.913156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:08.227 [2024-07-25 09:09:53.913186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:69544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.227 [2024-07-25 09:09:53.913207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:08.227 [2024-07-25 09:09:53.913237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:69552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.227 [2024-07-25 09:09:53.913258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:08.227 [2024-07-25 09:09:53.913287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:69560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.227 [2024-07-25 09:09:53.913307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:08.227 [2024-07-25 09:09:53.913337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:69568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.227 [2024-07-25 09:09:53.913358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:08.227 [2024-07-25 09:09:53.913387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:69576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.227 [2024-07-25 09:09:53.913408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:08.227 [2024-07-25 09:09:53.913446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:69584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.227 [2024-07-25 09:09:53.913468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:08.227 [2024-07-25 09:09:53.913499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:69592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.227 [2024-07-25 09:09:53.913520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:08.227 [2024-07-25 09:09:53.913598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:70048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.227 [2024-07-25 09:09:53.913627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.227 [2024-07-25 09:09:53.913663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:70056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.227 [2024-07-25 09:09:53.913684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.228 [2024-07-25 09:09:53.913705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:70064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.228 [2024-07-25 09:09:53.913725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.228 [2024-07-25 09:09:53.913745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:70072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.228 [2024-07-25 09:09:53.913764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.228 [2024-07-25 09:09:53.913785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:70080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.228 [2024-07-25 09:09:53.913804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.228 [2024-07-25 09:09:53.913842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:70088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.228 [2024-07-25 09:09:53.913863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.228 [2024-07-25 09:09:53.913885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:70096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.228 [2024-07-25 09:09:53.913903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.228 [2024-07-25 09:09:53.913924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:70104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.228 [2024-07-25 09:09:53.913942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.228 [2024-07-25 09:09:53.913975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:69600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.228 [2024-07-25 09:09:53.913994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.228 [2024-07-25 09:09:53.914015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:69608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.228 [2024-07-25 09:09:53.914034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.228 [2024-07-25 09:09:53.914054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:69616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.228 [2024-07-25 09:09:53.914083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.228 [2024-07-25 09:09:53.914105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:69624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.228 [2024-07-25 09:09:53.914124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.228 [2024-07-25 09:09:53.914145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:69632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.228 [2024-07-25 09:09:53.914163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.228 [2024-07-25 09:09:53.914183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:69640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.228 [2024-07-25 09:09:53.914220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.228 [2024-07-25 09:09:53.914242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:69648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.228 [2024-07-25 09:09:53.914261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.228 [2024-07-25 09:09:53.914282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:69656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.228 [2024-07-25 09:09:53.914301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.228 [2024-07-25 09:09:53.914322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:70112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.228 [2024-07-25 09:09:53.914341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.228 [2024-07-25 09:09:53.914363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:70120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.228 [2024-07-25 09:09:53.914383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.228 [2024-07-25 09:09:53.914403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:70128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.228 [2024-07-25 09:09:53.914422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.228 [2024-07-25 09:09:53.914443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:70136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.228 [2024-07-25 09:09:53.914462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.228 [2024-07-25 09:09:53.914482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:70144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.228 [2024-07-25 09:09:53.914501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.228 [2024-07-25 09:09:53.914521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:70152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.228 [2024-07-25 09:09:53.914540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.228 [2024-07-25 09:09:53.914560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:70160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.228 [2024-07-25 09:09:53.914579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.228 [2024-07-25 09:09:53.914607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:70168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.228 [2024-07-25 09:09:53.914627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.228 [2024-07-25 09:09:53.914649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:70176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.228 [2024-07-25 09:09:53.914667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.228 
[2024-07-25 09:09:53.914688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:70184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.228 [2024-07-25 09:09:53.914707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.228 [2024-07-25 09:09:53.914728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:70192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.228 [2024-07-25 09:09:53.914747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.228 [2024-07-25 09:09:53.914768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:70200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.228 [2024-07-25 09:09:53.914787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.228 [2024-07-25 09:09:53.914808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:70208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.228 [2024-07-25 09:09:53.914842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.228 [2024-07-25 09:09:53.914865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:70216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.228 [2024-07-25 09:09:53.914884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.228 [2024-07-25 09:09:53.914905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.228 [2024-07-25 09:09:53.914924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.228 [2024-07-25 09:09:53.914945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:70232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.228 [2024-07-25 09:09:53.914964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.228 [2024-07-25 09:09:53.914985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:69664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.228 [2024-07-25 09:09:53.915004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.228 [2024-07-25 09:09:53.915024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:69672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.228 [2024-07-25 09:09:53.915044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.228 [2024-07-25 09:09:53.915064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:69680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.228 [2024-07-25 09:09:53.915083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.228 [2024-07-25 09:09:53.915104] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:69688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.228 [2024-07-25 09:09:53.915130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.228 [2024-07-25 09:09:53.915151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:69696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.228 [2024-07-25 09:09:53.915170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.228 [2024-07-25 09:09:53.915191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:69704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.228 [2024-07-25 09:09:53.915211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.228 [2024-07-25 09:09:53.915231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:69712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.228 [2024-07-25 09:09:53.915250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.228 [2024-07-25 09:09:53.915270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:69720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.229 [2024-07-25 09:09:53.915289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.229 [2024-07-25 09:09:53.915310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:69728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.229 [2024-07-25 09:09:53.915328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.229 [2024-07-25 09:09:53.915349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:69736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.229 [2024-07-25 09:09:53.915368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.229 [2024-07-25 09:09:53.915389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:69744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.229 [2024-07-25 09:09:53.915408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.229 [2024-07-25 09:09:53.915428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:69752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.229 [2024-07-25 09:09:53.915448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.229 [2024-07-25 09:09:53.915469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:69760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.229 [2024-07-25 09:09:53.915488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.229 [2024-07-25 09:09:53.915509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:104 nsid:1 lba:69768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.229 [2024-07-25 09:09:53.915528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.229 [2024-07-25 09:09:53.915548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:69776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.229 [2024-07-25 09:09:53.915567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.229 [2024-07-25 09:09:53.915588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:69784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.229 [2024-07-25 09:09:53.915607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.229 [2024-07-25 09:09:53.915628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:70240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.229 [2024-07-25 09:09:53.915659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.229 [2024-07-25 09:09:53.915681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:70248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.229 [2024-07-25 09:09:53.915700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.229 [2024-07-25 09:09:53.915722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:70256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.229 [2024-07-25 09:09:53.915741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.229 [2024-07-25 09:09:53.915761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:70264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.229 [2024-07-25 09:09:53.915780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.229 [2024-07-25 09:09:53.915801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:70272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.229 [2024-07-25 09:09:53.915832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.229 [2024-07-25 09:09:53.915865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:70280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.229 [2024-07-25 09:09:53.915896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.229 [2024-07-25 09:09:53.915916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:70288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.229 [2024-07-25 09:09:53.915935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.229 [2024-07-25 09:09:53.915956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:70296 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.229 [2024-07-25 09:09:53.915975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.229 [2024-07-25 09:09:53.915995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:70304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.229 [2024-07-25 09:09:53.916014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.229 [2024-07-25 09:09:53.916035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:70312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.229 [2024-07-25 09:09:53.916054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.229 [2024-07-25 09:09:53.916075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:70320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.229 [2024-07-25 09:09:53.916094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.229 [2024-07-25 09:09:53.916115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:70328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.229 [2024-07-25 09:09:53.916134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.229 [2024-07-25 09:09:53.916154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:70336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.229 [2024-07-25 09:09:53.916173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.229 [2024-07-25 09:09:53.916202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:70344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.229 [2024-07-25 09:09:53.916231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.229 [2024-07-25 09:09:53.916252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:70352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.229 [2024-07-25 09:09:53.916271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.229 [2024-07-25 09:09:53.916301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:70360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.229 [2024-07-25 09:09:53.916321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.229 [2024-07-25 09:09:53.916343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:69792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.229 [2024-07-25 09:09:53.916362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.229 [2024-07-25 09:09:53.916383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.229 
[2024-07-25 09:09:53.916401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.229 [2024-07-25 09:09:53.916422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:69808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.229 [2024-07-25 09:09:53.916441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.229 [2024-07-25 09:09:53.916461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:69816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.229 [2024-07-25 09:09:53.916481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.229 [2024-07-25 09:09:53.916501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:69824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.229 [2024-07-25 09:09:53.916520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.229 [2024-07-25 09:09:53.916541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:69832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.229 [2024-07-25 09:09:53.916560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.229 [2024-07-25 09:09:53.916580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:69840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.229 [2024-07-25 09:09:53.916599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.229 [2024-07-25 09:09:53.916620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:69848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.229 [2024-07-25 09:09:53.916639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.229 [2024-07-25 09:09:53.916660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:69856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.229 [2024-07-25 09:09:53.916679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.229 [2024-07-25 09:09:53.916700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:69864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.229 [2024-07-25 09:09:53.916725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.229 [2024-07-25 09:09:53.916747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:69872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.229 [2024-07-25 09:09:53.916766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.229 [2024-07-25 09:09:53.916786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.229 [2024-07-25 09:09:53.916805] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.229 [2024-07-25 09:09:53.916838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:69888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.229 [2024-07-25 09:09:53.916859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.230 [2024-07-25 09:09:53.916880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:69896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.230 [2024-07-25 09:09:53.916917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.230 [2024-07-25 09:09:53.916939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:69904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.230 [2024-07-25 09:09:53.916959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.230 [2024-07-25 09:09:53.916985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b500 is same with the state(5) to be set 00:28:08.230 [2024-07-25 09:09:53.917009] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:08.230 [2024-07-25 09:09:53.917025] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:08.230 [2024-07-25 09:09:53.917042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69912 len:8 PRP1 0x0 PRP2 0x0 00:28:08.230 [2024-07-25 09:09:53.917061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.230 [2024-07-25 09:09:53.917082] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:08.230 [2024-07-25 09:09:53.917096] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:08.230 [2024-07-25 09:09:53.917111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70368 len:8 PRP1 0x0 PRP2 0x0 00:28:08.230 [2024-07-25 09:09:53.917129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.230 [2024-07-25 09:09:53.917146] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:08.230 [2024-07-25 09:09:53.917160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:08.230 [2024-07-25 09:09:53.917185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70376 len:8 PRP1 0x0 PRP2 0x0 00:28:08.230 [2024-07-25 09:09:53.917203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.230 [2024-07-25 09:09:53.917221] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:08.230 [2024-07-25 09:09:53.917235] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:08.230 [2024-07-25 09:09:53.917250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70384 len:8 PRP1 0x0 PRP2 0x0 00:28:08.230 [2024-07-25 09:09:53.917268] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.230 [2024-07-25 09:09:53.917295] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:08.230 [2024-07-25 09:09:53.917310] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:08.230 [2024-07-25 09:09:53.917325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70392 len:8 PRP1 0x0 PRP2 0x0 00:28:08.230 [2024-07-25 09:09:53.917343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.230 [2024-07-25 09:09:53.917360] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:08.230 [2024-07-25 09:09:53.917374] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:08.230 [2024-07-25 09:09:53.917390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70400 len:8 PRP1 0x0 PRP2 0x0 00:28:08.230 [2024-07-25 09:09:53.917408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.230 [2024-07-25 09:09:53.917425] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:08.230 [2024-07-25 09:09:53.917439] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:08.230 [2024-07-25 09:09:53.917453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70408 len:8 PRP1 0x0 PRP2 0x0 00:28:08.230 [2024-07-25 09:09:53.917471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.230 [2024-07-25 09:09:53.917488] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:08.230 [2024-07-25 09:09:53.917502] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:08.230 [2024-07-25 09:09:53.917517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70416 len:8 PRP1 0x0 PRP2 0x0 00:28:08.230 [2024-07-25 09:09:53.917540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.230 [2024-07-25 09:09:53.917559] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:08.230 [2024-07-25 09:09:53.917573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:08.230 [2024-07-25 09:09:53.917594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70424 len:8 PRP1 0x0 PRP2 0x0 00:28:08.230 [2024-07-25 09:09:53.917613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.230 [2024-07-25 09:09:53.917638] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:08.230 [2024-07-25 09:09:53.917659] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:08.230 [2024-07-25 09:09:53.917674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70432 len:8 PRP1 0x0 PRP2 0x0 00:28:08.230 [2024-07-25 09:09:53.917692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.230 [2024-07-25 09:09:53.917710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:08.230 [2024-07-25 09:09:53.917724] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:08.230 [2024-07-25 09:09:53.917739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70440 len:8 PRP1 0x0 PRP2 0x0 00:28:08.230 [2024-07-25 09:09:53.917757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.230 [2024-07-25 09:09:53.917774] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:08.230 [2024-07-25 09:09:53.917788] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:08.230 [2024-07-25 09:09:53.917803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70448 len:8 PRP1 0x0 PRP2 0x0 00:28:08.230 [2024-07-25 09:09:53.917842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.230 [2024-07-25 09:09:53.917863] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:08.230 [2024-07-25 09:09:53.917877] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:08.230 [2024-07-25 09:09:53.917892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70456 len:8 PRP1 0x0 PRP2 0x0 00:28:08.230 [2024-07-25 09:09:53.917910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.230 [2024-07-25 09:09:53.917928] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:08.230 [2024-07-25 09:09:53.917943] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:08.230 [2024-07-25 09:09:53.917957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70464 len:8 PRP1 0x0 PRP2 0x0 00:28:08.230 [2024-07-25 09:09:53.917984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.230 [2024-07-25 09:09:53.918001] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:08.230 [2024-07-25 09:09:53.918015] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:08.230 [2024-07-25 09:09:53.918030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70472 len:8 PRP1 0x0 PRP2 0x0 00:28:08.230 [2024-07-25 09:09:53.918048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.230 [2024-07-25 09:09:53.918065] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:08.230 [2024-07-25 09:09:53.918078] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:08.230 [2024-07-25 09:09:53.918093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70480 len:8 PRP1 0x0 PRP2 0x0 00:28:08.230 [2024-07-25 09:09:53.918116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:08.230 [2024-07-25 09:09:53.918134] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:08.230 [2024-07-25 09:09:53.918147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:08.230 [2024-07-25 09:09:53.918162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70488 len:8 PRP1 0x0 PRP2 0x0 00:28:08.230 [2024-07-25 09:09:53.918180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.230 [2024-07-25 09:09:53.918445] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b500 was disconnected and freed. reset controller. 00:28:08.231 [2024-07-25 09:09:53.918607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:08.231 [2024-07-25 09:09:53.918640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.231 [2024-07-25 09:09:53.918682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:08.231 [2024-07-25 09:09:53.918702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.231 [2024-07-25 09:09:53.918721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:08.231 [2024-07-25 09:09:53.918740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.231 [2024-07-25 09:09:53.918758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:08.231 [2024-07-25 09:09:53.918782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.231 [2024-07-25 09:09:53.918808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.231 [2024-07-25 09:09:53.918845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.231 [2024-07-25 09:09:53.918874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(5) to be set 00:28:08.231 [2024-07-25 09:09:53.920358] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.231 [2024-07-25 09:09:53.920417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:28:08.231 [2024-07-25 09:09:53.920831] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.231 [2024-07-25 09:09:53.920874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ad80 with addr=10.0.0.2, port=4421 00:28:08.231 [2024-07-25 09:09:53.920898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(5) to be set 00:28:08.231 [2024-07-25 09:09:53.921106] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:28:08.231 [2024-07-25 09:09:53.921202] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.231 [2024-07-25 09:09:53.921231] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.231 [2024-07-25 09:09:53.921258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.231 [2024-07-25 09:09:53.921318] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:08.231 [2024-07-25 09:09:53.921344] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.231 [2024-07-25 09:10:04.000644] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:08.231 Received shutdown signal, test time was about 55.530447 seconds 00:28:08.231 00:28:08.231 Latency(us) 00:28:08.231 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:08.231 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:08.231 Verification LBA range: start 0x0 length 0x4000 00:28:08.231 Nvme0n1 : 55.53 5484.38 21.42 0.00 0.00 23309.76 1660.74 7046430.72 00:28:08.231 =================================================================================================================== 00:28:08.231 Total : 5484.38 21.42 0.00 0.00 23309.76 1660.74 7046430.72 00:28:08.231 09:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:08.489 09:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:28:08.489 09:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:28:08.489 09:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:28:08.489 09:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:08.489 09:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:28:08.489 09:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:08.489 09:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:28:08.489 09:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:08.489 09:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:08.489 rmmod nvme_tcp 00:28:08.489 rmmod nvme_fabrics 00:28:08.489 rmmod nvme_keyring 00:28:08.489 09:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:08.489 09:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:28:08.489 09:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:28:08.489 09:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 86708 ']' 00:28:08.489 09:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 86708 00:28:08.489 09:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 86708 ']' 00:28:08.489 09:10:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 86708 00:28:08.489 09:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:28:08.489 09:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:08.489 09:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86708 00:28:08.489 09:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:08.489 09:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:08.489 killing process with pid 86708 00:28:08.489 09:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86708' 00:28:08.489 09:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 86708 00:28:08.489 09:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 86708 00:28:09.865 09:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:09.865 09:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:09.865 09:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:09.865 09:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:09.865 09:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:09.865 09:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:09.865 09:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:09.865 09:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:09.865 09:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:09.865 00:28:09.865 real 1m3.609s 00:28:09.865 user 2m57.211s 00:28:09.865 sys 0m16.222s 00:28:09.865 09:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:09.865 09:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:28:09.865 ************************************ 00:28:09.865 END TEST nvmf_host_multipath 00:28:09.865 ************************************ 00:28:10.124 09:10:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:28:10.124 09:10:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:10.124 09:10:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:10.124 09:10:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.124 ************************************ 00:28:10.124 START TEST nvmf_timeout 00:28:10.124 ************************************ 00:28:10.124 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:28:10.124 * Looking for test storage... 
00:28:10.124 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:10.124 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:10.124 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:28:10.124 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:10.124 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:10.124 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:10.124 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:10.124 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:10.124 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:10.124 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:10.124 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:10.124 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:10.124 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:10.124 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:28:10.124 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:28:10.124 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:10.124 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:10.124 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:10.124 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:10.124 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:10.124 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:10.124 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:10.124 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:10.124 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:10.124 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
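For orientation, the variables set just above split the harness across two SPDK RPC servers: $rpc_py with no -s argument talks to the nvmf_tgt application on its default socket, while bdevperf is driven through the separate bdevperf_rpc_sock (/var/tmp/bdevperf.sock). A minimal sketch of that split, using only commands that appear later in this trace:

    $rpc_py nvmf_create_transport -t tcp -o -u 8192                 # target-side RPC, default socket
    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1   # initiator-side RPC, bdevperf socket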
00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:28:10.125 Cannot find device "nvmf_tgt_br" 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:28:10.125 Cannot find device "nvmf_tgt_br2" 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # true 
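The interface and namespace names assigned above describe the virtual topology that nvmf_veth_init builds in the lines that follow. Condensed into plain ip commands, with the device names and addresses exactly as in this run, the initiator leg and the first target leg amount to roughly:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br              # initiator leg
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br                # first target leg
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                               # NVMF_INITIATOR_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if # NVMF_FIRST_TARGET_IP
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

The second target pair (nvmf_tgt_if2 at 10.0.0.3 behind nvmf_tgt_br2), the link-up steps and the iptables ACCEPT rules follow the same pattern and appear verbatim in the trace below.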
00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:28:10.125 Cannot find device "nvmf_tgt_br" 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:28:10.125 Cannot find device "nvmf_tgt_br2" 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:28:10.125 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:28:10.421 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:10.421 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:10.421 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:28:10.421 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:10.421 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:10.421 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:28:10.421 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:28:10.421 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:10.421 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:10.421 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:10.421 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:10.421 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:10.421 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:10.421 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:10.421 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:10.421 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:28:10.421 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:28:10.421 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:28:10.422 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:28:10.422 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:10.422 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:10.422 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:10.422 09:10:17 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:28:10.422 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:28:10.422 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:28:10.422 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:10.422 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:10.422 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:10.422 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:10.422 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:28:10.422 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:10.422 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:28:10.422 00:28:10.422 --- 10.0.0.2 ping statistics --- 00:28:10.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:10.422 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:28:10.422 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:28:10.422 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:10.422 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:28:10.422 00:28:10.422 --- 10.0.0.3 ping statistics --- 00:28:10.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:10.422 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:28:10.422 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:10.422 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:10.422 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:28:10.422 00:28:10.422 --- 10.0.0.1 ping statistics --- 00:28:10.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:10.422 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:28:10.422 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:10.422 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:28:10.422 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:10.422 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:10.422 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:10.422 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:10.422 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:10.422 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:10.422 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:10.422 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:28:10.422 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:10.422 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:10.422 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:10.422 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=87875 00:28:10.422 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:28:10.422 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 87875 00:28:10.422 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 87875 ']' 00:28:10.422 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:10.422 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:10.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:10.422 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:10.422 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:10.422 09:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:10.680 [2024-07-25 09:10:17.574998] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:10.680 [2024-07-25 09:10:17.575174] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:10.680 [2024-07-25 09:10:17.743039] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:10.937 [2024-07-25 09:10:17.980453] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:28:10.937 [2024-07-25 09:10:17.980518] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:10.937 [2024-07-25 09:10:17.980537] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:10.937 [2024-07-25 09:10:17.980552] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:10.937 [2024-07-25 09:10:17.980564] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:10.937 [2024-07-25 09:10:17.980789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:10.937 [2024-07-25 09:10:17.980972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:11.194 [2024-07-25 09:10:18.193723] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:28:11.452 09:10:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:11.452 09:10:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:28:11.452 09:10:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:11.452 09:10:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:11.452 09:10:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:11.711 09:10:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:11.711 09:10:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:11.711 09:10:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:11.969 [2024-07-25 09:10:18.830128] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:11.969 09:10:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:12.227 Malloc0 00:28:12.227 09:10:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:12.485 09:10:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:12.743 09:10:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:12.743 [2024-07-25 09:10:19.851140] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:13.001 09:10:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=87931 00:28:13.001 09:10:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 87931 /var/tmp/bdevperf.sock 00:28:13.001 09:10:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:28:13.001 09:10:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 87931 ']' 00:28:13.001 09:10:19 nvmf_tcp.nvmf_host.nvmf_timeout -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:13.001 09:10:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:13.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:13.001 09:10:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:13.001 09:10:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:13.001 09:10:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:13.001 [2024-07-25 09:10:19.956498] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:13.001 [2024-07-25 09:10:19.956655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87931 ] 00:28:13.258 [2024-07-25 09:10:20.124501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.258 [2024-07-25 09:10:20.362206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:13.516 [2024-07-25 09:10:20.565270] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:28:13.774 09:10:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:13.774 09:10:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:28:13.774 09:10:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:28:14.031 09:10:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:28:14.288 NVMe0n1 00:28:14.547 09:10:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=87950 00:28:14.547 09:10:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:14.547 09:10:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:28:14.547 Running I/O for 10 seconds... 
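Stripped of the xtrace prefixes, the host/timeout.sh steps traced above provision the target and wire bdevperf to it as follows. This is a condensed sketch, with the paths, NQN and addresses exactly as in this run; rpc_py is the variable the script itself defines.

    # Target side (nvmf_tgt running inside nvmf_tgt_ns_spdk on core mask 0x3):
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc0
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: bdevperf in -z (wait-for-RPC) mode on its own socket, core mask 0x4:
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -f &
    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    # Start the verify workload in the background (rpc_pid in the trace above) and let it run:
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

The --ctrlr-loss-timeout-sec 5 / --reconnect-delay-sec 2 pair is what this test exercises: the nvmf_subsystem_remove_listener call below and the run of ABORTED - SQ DELETION completions that follow are the listener being pulled while the verify workload is still issuing I/O.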
00:28:15.479 09:10:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:15.738 [2024-07-25 09:10:22.671298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:51360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.738 [2024-07-25 09:10:22.671401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.738 [2024-07-25 09:10:22.671440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:51488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.738 [2024-07-25 09:10:22.671461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.738 [2024-07-25 09:10:22.671480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:51496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.738 [2024-07-25 09:10:22.671497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.738 [2024-07-25 09:10:22.671515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:51504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.738 [2024-07-25 09:10:22.671532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.738 [2024-07-25 09:10:22.671549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:51512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.738 [2024-07-25 09:10:22.671565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.738 [2024-07-25 09:10:22.671582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:51520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.738 [2024-07-25 09:10:22.671603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.739 [2024-07-25 09:10:22.671619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:51528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.739 [2024-07-25 09:10:22.671637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.739 [2024-07-25 09:10:22.671653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:51536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.739 [2024-07-25 09:10:22.671670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.739 [2024-07-25 09:10:22.671685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:51544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.739 [2024-07-25 09:10:22.671702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.739 [2024-07-25 09:10:22.671720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:51552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:15.739 [2024-07-25 09:10:22.671737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.739 [2024-07-25 09:10:22.671753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:51560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.739 [2024-07-25 09:10:22.671770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.739 [2024-07-25 09:10:22.671786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:51568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.739 [2024-07-25 09:10:22.671803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.739 [2024-07-25 09:10:22.671833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:51576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.739 [2024-07-25 09:10:22.671875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.739 [2024-07-25 09:10:22.671894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:51584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.739 [2024-07-25 09:10:22.671915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.739 [2024-07-25 09:10:22.671932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:51592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.739 [2024-07-25 09:10:22.671949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.739 [2024-07-25 09:10:22.671965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:51600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.739 [2024-07-25 09:10:22.671982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.739 [2024-07-25 09:10:22.671999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:51608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.739 [2024-07-25 09:10:22.672016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.739 [2024-07-25 09:10:22.672033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:51616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.739 [2024-07-25 09:10:22.672050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.739 [2024-07-25 09:10:22.672066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:51624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.739 [2024-07-25 09:10:22.672083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.739 [2024-07-25 09:10:22.672099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:51632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.739 [2024-07-25 09:10:22.672116] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.739 [2024-07-25 09:10:22.672131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:51640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.739 [2024-07-25 09:10:22.672148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.739 [2024-07-25 09:10:22.672164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:51648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.739 [2024-07-25 09:10:22.672192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.739 [2024-07-25 09:10:22.672208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:51656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.739 [2024-07-25 09:10:22.672225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.739 [2024-07-25 09:10:22.672241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:51664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.739 [2024-07-25 09:10:22.672258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.739 [2024-07-25 09:10:22.672275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:51672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.739 [2024-07-25 09:10:22.672292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.739 [2024-07-25 09:10:22.672308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:51680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.739 [2024-07-25 09:10:22.672324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.739 [2024-07-25 09:10:22.672354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:51688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.739 [2024-07-25 09:10:22.672370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.739 [2024-07-25 09:10:22.672387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:51696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.739 [2024-07-25 09:10:22.672405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.739 [2024-07-25 09:10:22.672421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:51704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.739 [2024-07-25 09:10:22.672438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.739 [2024-07-25 09:10:22.672454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:51712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.739 [2024-07-25 09:10:22.672473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.739 [2024-07-25 09:10:22.672490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:51720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.739 [2024-07-25 09:10:22.672507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.739 [2024-07-25 09:10:22.672523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:51728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.739 [2024-07-25 09:10:22.672540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.739 [2024-07-25 09:10:22.672556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:51736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.739 [2024-07-25 09:10:22.672572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.739 [2024-07-25 09:10:22.672589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:51744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.739 [2024-07-25 09:10:22.672605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.739 [2024-07-25 09:10:22.672621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:51752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.739 [2024-07-25 09:10:22.672638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.739 [2024-07-25 09:10:22.672654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:51760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.739 [2024-07-25 09:10:22.672671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.739 [2024-07-25 09:10:22.672687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:51768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.739 [2024-07-25 09:10:22.672704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.739 [2024-07-25 09:10:22.672721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:51776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.739 [2024-07-25 09:10:22.672740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.739 [2024-07-25 09:10:22.672757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:51784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.739 [2024-07-25 09:10:22.672773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.739 [2024-07-25 09:10:22.672790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:51792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.739 [2024-07-25 09:10:22.672806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:15.740 [2024-07-25 09:10:22.672833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:51800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.740 [2024-07-25 09:10:22.672854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.740 [2024-07-25 09:10:22.672871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:51808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.740 [2024-07-25 09:10:22.672889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.740 [2024-07-25 09:10:22.672905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:51816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.740 [2024-07-25 09:10:22.672922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.740 [2024-07-25 09:10:22.672938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:51824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.740 [2024-07-25 09:10:22.672956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.740 [2024-07-25 09:10:22.672972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:51832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.740 [2024-07-25 09:10:22.672989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.740 [2024-07-25 09:10:22.673005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:51840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.740 [2024-07-25 09:10:22.673032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.740 [2024-07-25 09:10:22.673054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:51848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.740 [2024-07-25 09:10:22.673070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.740 [2024-07-25 09:10:22.673086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:51856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.740 [2024-07-25 09:10:22.673103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.740 [2024-07-25 09:10:22.673120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:51864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.740 [2024-07-25 09:10:22.673152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.740 [2024-07-25 09:10:22.673171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:51872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.740 [2024-07-25 09:10:22.673188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.740 [2024-07-25 
09:10:22.673205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:51880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.740 [2024-07-25 09:10:22.673222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.740 [2024-07-25 09:10:22.673238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:51888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.740 [2024-07-25 09:10:22.673255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.740 [2024-07-25 09:10:22.673271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:51896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.740 [2024-07-25 09:10:22.673288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.740 [2024-07-25 09:10:22.673304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:51904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.740 [2024-07-25 09:10:22.673325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.740 [2024-07-25 09:10:22.673342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:51912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.740 [2024-07-25 09:10:22.673364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.740 [2024-07-25 09:10:22.673380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:51920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.740 [2024-07-25 09:10:22.673396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.740 [2024-07-25 09:10:22.673412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:51928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.740 [2024-07-25 09:10:22.673429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.740 [2024-07-25 09:10:22.673444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:51936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.740 [2024-07-25 09:10:22.673460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.740 [2024-07-25 09:10:22.673476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:51944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.740 [2024-07-25 09:10:22.673493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.740 [2024-07-25 09:10:22.673509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:51952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.740 [2024-07-25 09:10:22.673527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.740 [2024-07-25 09:10:22.673543] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:51960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.740 [2024-07-25 09:10:22.673560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.740 [2024-07-25 09:10:22.673576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:51968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.740 [2024-07-25 09:10:22.673596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.740 [2024-07-25 09:10:22.673612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:51976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.740 [2024-07-25 09:10:22.673629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.740 [2024-07-25 09:10:22.673645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:51984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.740 [2024-07-25 09:10:22.673662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.740 [2024-07-25 09:10:22.673678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:51992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.740 [2024-07-25 09:10:22.673695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.740 [2024-07-25 09:10:22.673717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:52000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.740 [2024-07-25 09:10:22.673733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.740 [2024-07-25 09:10:22.673749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:52008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.740 [2024-07-25 09:10:22.673768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.740 [2024-07-25 09:10:22.673785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:52016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.740 [2024-07-25 09:10:22.673801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.740 [2024-07-25 09:10:22.673829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:52024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.740 [2024-07-25 09:10:22.673849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.740 [2024-07-25 09:10:22.673866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:52032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.740 [2024-07-25 09:10:22.673885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.740 [2024-07-25 09:10:22.673901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:76 nsid:1 lba:52040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.740 [2024-07-25 09:10:22.673922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.740 [2024-07-25 09:10:22.673938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:52048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.740 [2024-07-25 09:10:22.673955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.740 [2024-07-25 09:10:22.673971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:52056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.740 [2024-07-25 09:10:22.673988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.740 [2024-07-25 09:10:22.674004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:52064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.740 [2024-07-25 09:10:22.674027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.740 [2024-07-25 09:10:22.674043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:52072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.740 [2024-07-25 09:10:22.674060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.740 [2024-07-25 09:10:22.674080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:52080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.740 [2024-07-25 09:10:22.674106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.741 [2024-07-25 09:10:22.674122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:52088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.741 [2024-07-25 09:10:22.674139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.741 [2024-07-25 09:10:22.674156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:52096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.741 [2024-07-25 09:10:22.674175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.741 [2024-07-25 09:10:22.674192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:52104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.741 [2024-07-25 09:10:22.674210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.741 [2024-07-25 09:10:22.674227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:52112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.741 [2024-07-25 09:10:22.674245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.741 [2024-07-25 09:10:22.674262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:52120 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:15.741 [2024-07-25 09:10:22.674279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.741 [2024-07-25 09:10:22.674295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:52128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.741 [2024-07-25 09:10:22.674312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.741 [2024-07-25 09:10:22.674328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:52136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.741 [2024-07-25 09:10:22.674344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.741 [2024-07-25 09:10:22.674360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:52144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.741 [2024-07-25 09:10:22.674383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.741 [2024-07-25 09:10:22.674417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:52152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.741 [2024-07-25 09:10:22.674443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.741 [2024-07-25 09:10:22.674461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:52160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.741 [2024-07-25 09:10:22.674481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.741 [2024-07-25 09:10:22.674498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:52168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.741 [2024-07-25 09:10:22.674515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.741 [2024-07-25 09:10:22.674531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:52176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.741 [2024-07-25 09:10:22.674547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.741 [2024-07-25 09:10:22.674563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:52184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.741 [2024-07-25 09:10:22.674580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.741 [2024-07-25 09:10:22.674596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:52192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.741 [2024-07-25 09:10:22.674612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.741 [2024-07-25 09:10:22.674629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:52200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.741 [2024-07-25 
09:10:22.674646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.741 [2024-07-25 09:10:22.674662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:52208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.741 [2024-07-25 09:10:22.674683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.741 [2024-07-25 09:10:22.674700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:52216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.741 [2024-07-25 09:10:22.674717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.741 [2024-07-25 09:10:22.674733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:52224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.741 [2024-07-25 09:10:22.674754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.741 [2024-07-25 09:10:22.674771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:52232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.741 [2024-07-25 09:10:22.674788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.741 [2024-07-25 09:10:22.674804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:52240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.741 [2024-07-25 09:10:22.674835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.741 [2024-07-25 09:10:22.674853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:52248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.741 [2024-07-25 09:10:22.674871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.741 [2024-07-25 09:10:22.674887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:52256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.741 [2024-07-25 09:10:22.674904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.741 [2024-07-25 09:10:22.674920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:52264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.741 [2024-07-25 09:10:22.674936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.741 [2024-07-25 09:10:22.674952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:52272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.741 [2024-07-25 09:10:22.674969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.741 [2024-07-25 09:10:22.674986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:52280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.741 [2024-07-25 09:10:22.675002] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.741 [2024-07-25 09:10:22.675019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:52288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.741 [2024-07-25 09:10:22.675037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.741 [2024-07-25 09:10:22.675054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:52296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.741 [2024-07-25 09:10:22.675070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.741 [2024-07-25 09:10:22.675086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:52304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.741 [2024-07-25 09:10:22.675103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.741 [2024-07-25 09:10:22.675119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:52312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.741 [2024-07-25 09:10:22.675136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.741 [2024-07-25 09:10:22.675152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:52320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.741 [2024-07-25 09:10:22.675169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.741 [2024-07-25 09:10:22.675185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:52328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.741 [2024-07-25 09:10:22.675203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.741 [2024-07-25 09:10:22.675220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:52336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.741 [2024-07-25 09:10:22.675241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.741 [2024-07-25 09:10:22.675258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:52344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.741 [2024-07-25 09:10:22.675274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.742 [2024-07-25 09:10:22.675291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:52352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.742 [2024-07-25 09:10:22.675310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.742 [2024-07-25 09:10:22.675342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:52360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.742 [2024-07-25 09:10:22.675373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.742 [2024-07-25 09:10:22.675391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:51368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.742 [2024-07-25 09:10:22.675412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.742 [2024-07-25 09:10:22.675430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:51376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.742 [2024-07-25 09:10:22.675460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.742 [2024-07-25 09:10:22.675477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:51384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.742 [2024-07-25 09:10:22.675494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.742 [2024-07-25 09:10:22.675511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:51392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.742 [2024-07-25 09:10:22.675528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.742 [2024-07-25 09:10:22.675544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:51400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.742 [2024-07-25 09:10:22.675561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.742 [2024-07-25 09:10:22.675577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:51408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.742 [2024-07-25 09:10:22.675594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.742 [2024-07-25 09:10:22.675617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:51416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.742 [2024-07-25 09:10:22.675636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.742 [2024-07-25 09:10:22.675652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.742 [2024-07-25 09:10:22.675671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.742 [2024-07-25 09:10:22.675688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:51432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.742 [2024-07-25 09:10:22.675704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.742 [2024-07-25 09:10:22.675720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:51440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.742 [2024-07-25 09:10:22.675736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:15.742 [2024-07-25 09:10:22.675752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:51448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.742 [2024-07-25 09:10:22.675768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.742 [2024-07-25 09:10:22.675784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:51456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.742 [2024-07-25 09:10:22.675801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.742 [2024-07-25 09:10:22.675829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:51464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.742 [2024-07-25 09:10:22.675874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.742 [2024-07-25 09:10:22.675893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:51472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.742 [2024-07-25 09:10:22.675917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.742 [2024-07-25 09:10:22.675933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:51480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.742 [2024-07-25 09:10:22.675952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.742 [2024-07-25 09:10:22.675976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:52368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.742 [2024-07-25 09:10:22.675998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.742 [2024-07-25 09:10:22.676016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(5) to be set 00:28:15.742 [2024-07-25 09:10:22.676043] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:15.742 [2024-07-25 09:10:22.676056] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:15.742 [2024-07-25 09:10:22.676073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52376 len:8 PRP1 0x0 PRP2 0x0 00:28:15.742 [2024-07-25 09:10:22.676088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.742 [2024-07-25 09:10:22.676352] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b000 was disconnected and freed. reset controller. 
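Every completion in the dump above carries the same status pair. A minimal sketch, assuming the "(00/08)" field printed by spdk_nvme_print_completion is the NVMe status code type followed by the status code (a hypothetical log-triage helper, not part of timeout.sh or SPDK):

# Hypothetical decoder for the "(xx/yy)" pair appended to each completion above.
# 00/08 is assumed to be status code type 0 (generic) / status code 0x08, i.e.
# "Command Aborted due to SQ Deletion" -- which is why every WRITE/READ still
# queued on qid:1 reports ABORTED - SQ DELETION once the submission queue is
# torn down during the disconnect.
decode_nvme_status() {
    local sct=$1 sc=$2
    case "${sct}/${sc}" in
        00/00) echo "SUCCESS" ;;
        00/08) echo "ABORTED - SQ DELETION" ;;
        *)     echo "sct=0x${sct} sc=0x${sc} (see the NVMe base spec status tables)" ;;
    esac
}
decode_nvme_status 00 08   # prints: ABORTED - SQ DELETION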
00:28:15.742 [2024-07-25 09:10:22.676481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.742 [2024-07-25 09:10:22.676521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.742 [2024-07-25 09:10:22.676548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.742 [2024-07-25 09:10:22.676562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.742 [2024-07-25 09:10:22.676579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.742 [2024-07-25 09:10:22.676592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.742 [2024-07-25 09:10:22.676612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.742 [2024-07-25 09:10:22.676626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.742 [2024-07-25 09:10:22.676641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:28:15.742 [2024-07-25 09:10:22.676921] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:15.742 [2024-07-25 09:10:22.676974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:28:15.742 [2024-07-25 09:10:22.677105] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.742 [2024-07-25 09:10:22.677136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:28:15.742 [2024-07-25 09:10:22.677157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:28:15.742 [2024-07-25 09:10:22.677186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:28:15.742 [2024-07-25 09:10:22.677220] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:15.742 [2024-07-25 09:10:22.677235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:15.742 [2024-07-25 09:10:22.677253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:15.742 [2024-07-25 09:10:22.677286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
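Each reconnect attempt above dies inside uring_sock_create with "connect() failed, errno = 111" before the controller can be reinitialized. A quick triage one-liner (hypothetical, not part of the test suite) to turn that number into its symbolic name while reading logs like this:

# Translate the numeric errno from "connect() failed, errno = 111" into a name.
python3 -c 'import errno, os, sys; e = int(sys.argv[1]); print(errno.errorcode.get(e, "?"), "-", os.strerror(e))' 111
# on Linux this prints: ECONNREFUSED - Connection refused

ECONNREFUSED here simply means nothing is accepting TCP connections on 10.0.0.2:4420 while the target's listener is down, so the bdev_nvme reset path keeps failing and scheduling another retry.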
00:28:15.742 [2024-07-25 09:10:22.677312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:15.742 09:10:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:28:17.640 [2024-07-25 09:10:24.677573] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.640 [2024-07-25 09:10:24.677645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:28:17.640 [2024-07-25 09:10:24.677675] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:28:17.640 [2024-07-25 09:10:24.677714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:28:17.640 [2024-07-25 09:10:24.677763] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:17.640 [2024-07-25 09:10:24.677781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:17.640 [2024-07-25 09:10:24.677803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:17.640 [2024-07-25 09:10:24.677865] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:17.640 [2024-07-25 09:10:24.677889] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:17.640 09:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:28:17.640 09:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:17.640 09:10:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:28:17.898 09:10:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:28:17.898 09:10:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:28:17.898 09:10:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:28:17.898 09:10:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:28:18.156 09:10:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:28:18.156 09:10:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:28:20.056 [2024-07-25 09:10:26.678094] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.056 [2024-07-25 09:10:26.678164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:28:20.056 [2024-07-25 09:10:26.678195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:28:20.056 [2024-07-25 09:10:26.678235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:28:20.056 [2024-07-25 09:10:26.678267] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:20.056 [2024-07-25 09:10:26.678283] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization 
failed 00:28:20.056 [2024-07-25 09:10:26.678306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:20.056 [2024-07-25 09:10:26.678348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:20.056 [2024-07-25 09:10:26.678372] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:21.957 [2024-07-25 09:10:28.678505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:21.957 [2024-07-25 09:10:28.678646] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:21.957 [2024-07-25 09:10:28.678669] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:21.957 [2024-07-25 09:10:28.678689] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:28:21.957 [2024-07-25 09:10:28.678735] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:22.890 00:28:22.890 Latency(us) 00:28:22.890 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:22.890 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:22.890 Verification LBA range: start 0x0 length 0x4000 00:28:22.890 NVMe0n1 : 8.16 786.63 3.07 15.68 0.00 159270.30 4676.89 7015926.69 00:28:22.890 =================================================================================================================== 00:28:22.890 Total : 786.63 3.07 15.68 0.00 159270.30 4676.89 7015926.69 00:28:22.890 0 00:28:23.148 09:10:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:28:23.148 09:10:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:23.148 09:10:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:28:23.714 09:10:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:28:23.714 09:10:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:28:23.714 09:10:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:28:23.714 09:10:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:28:23.714 09:10:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:28:23.714 09:10:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 87950 00:28:23.714 09:10:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 87931 00:28:23.714 09:10:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 87931 ']' 00:28:23.714 09:10:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 87931 00:28:23.714 09:10:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:28:23.714 09:10:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:23.714 09:10:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87931 00:28:23.714 killing process with pid 87931 00:28:23.714 Received shutdown signal, test time was about 9.308139 seconds 00:28:23.714 00:28:23.714 Latency(us) 00:28:23.714 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:23.714 =================================================================================================================== 00:28:23.714 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:23.714 09:10:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:28:23.714 09:10:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:28:23.714 09:10:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87931' 00:28:23.714 09:10:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 87931 00:28:23.714 09:10:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 87931 00:28:25.143 09:10:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:25.143 [2024-07-25 09:10:32.245873] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:25.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:25.401 09:10:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=88080 00:28:25.401 09:10:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:28:25.401 09:10:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 88080 /var/tmp/bdevperf.sock 00:28:25.401 09:10:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 88080 ']' 00:28:25.401 09:10:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:25.401 09:10:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:25.401 09:10:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:25.401 09:10:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:25.401 09:10:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:25.401 [2024-07-25 09:10:32.387963] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
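The empty-string comparisons traced at host/timeout.sh@62 and @63 above confirm that, once the reconnect attempts are given up, both the NVMe controller and its bdev have been deleted from the bdevperf application before it is killed. A minimal sketch of those two probes, reconstructed from the xtrace (the real helper bodies in test/nvmf/host/timeout.sh are not shown in this log and may differ):

# Probes run at host/timeout.sh@41 and @37 above: ask the bdevperf RPC socket
# which NVMe controllers / bdevs still exist.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
get_controller() { "$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name'; }
get_bdev()       { "$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name'; }
# Right after the disconnect they still return NVMe0 / NVMe0n1; once the
# controller-loss timeout expires they return nothing, which is what the
# [[ '' == '' ]] checks assert before killprocess stops the old bdevperf.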
00:28:25.401 [2024-07-25 09:10:32.388402] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88080 ] 00:28:25.659 [2024-07-25 09:10:32.565786] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.918 [2024-07-25 09:10:32.804677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:25.918 [2024-07-25 09:10:33.009680] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:28:26.176 09:10:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:26.176 09:10:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:28:26.176 09:10:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:28:26.433 09:10:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:28:26.998 NVMe0n1 00:28:26.998 09:10:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=88104 00:28:26.998 09:10:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:26.998 09:10:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:28:26.998 Running I/O for 10 seconds... 
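The fresh bdevperf instance is then attached to the target at host/timeout.sh@78 and @79 with an explicit reconnect policy. A sketch of the same two RPC calls with the timeout knobs annotated (semantics paraphrased roughly; the SPDK bdev_nvme documentation is authoritative):

# Replay of the attach traced above, against the bdevperf RPC socket.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1   # as traced at @78
"$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 2 \
    --ctrlr-loss-timeout-sec 5
# reconnect-delay-sec 1:      retry the TCP connection about once per second
# fast-io-fail-timeout-sec 2: start failing I/O fast after ~2 s without a connection
# ctrlr-loss-timeout-sec 5:   give up and delete the controller after ~5 s offline

With the listener removed again at host/timeout.sh@87 right below, these values bound how long the queued WRITE/READ stream keeps being retried before it is aborted and the controller is dropped.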
00:28:27.932 09:10:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:28.193 [2024-07-25 09:10:35.152896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:49304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.193 [2024-07-25 09:10:35.152968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.193 [2024-07-25 09:10:35.153027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:48416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.193 [2024-07-25 09:10:35.153045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.193 [2024-07-25 09:10:35.153065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:48424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.193 [2024-07-25 09:10:35.153080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.193 [2024-07-25 09:10:35.153102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:48432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.193 [2024-07-25 09:10:35.153117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.193 [2024-07-25 09:10:35.153135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:48440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.193 [2024-07-25 09:10:35.153149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.193 [2024-07-25 09:10:35.153167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.193 [2024-07-25 09:10:35.153181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.193 [2024-07-25 09:10:35.153199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:48456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.193 [2024-07-25 09:10:35.153213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.193 [2024-07-25 09:10:35.153232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:48464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.193 [2024-07-25 09:10:35.153245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.193 [2024-07-25 09:10:35.153265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:49312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.193 [2024-07-25 09:10:35.153279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.193 [2024-07-25 09:10:35.153298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:48472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:28.193 [2024-07-25 09:10:35.153311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.193 [2024-07-25 09:10:35.153330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:48480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.193 [2024-07-25 09:10:35.153344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.193 [2024-07-25 09:10:35.153367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:48488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.193 [2024-07-25 09:10:35.153382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.193 [2024-07-25 09:10:35.153400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:48496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.193 [2024-07-25 09:10:35.153414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.193 [2024-07-25 09:10:35.153444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:48504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.193 [2024-07-25 09:10:35.153459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.194 [2024-07-25 09:10:35.153477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:48512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.194 [2024-07-25 09:10:35.153490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.194 [2024-07-25 09:10:35.153508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:48520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.194 [2024-07-25 09:10:35.153522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.194 [2024-07-25 09:10:35.153541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:48528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.194 [2024-07-25 09:10:35.153554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.194 [2024-07-25 09:10:35.153576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:48536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.194 [2024-07-25 09:10:35.153590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.194 [2024-07-25 09:10:35.153608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:48544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.194 [2024-07-25 09:10:35.153622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.194 [2024-07-25 09:10:35.153643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:48552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.194 [2024-07-25 09:10:35.153656] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.194 [2024-07-25 09:10:35.153675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:48560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.194 [2024-07-25 09:10:35.153688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.194 [2024-07-25 09:10:35.153706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:48568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.194 [2024-07-25 09:10:35.153720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.194 [2024-07-25 09:10:35.153738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:48576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.194 [2024-07-25 09:10:35.153752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.194 [2024-07-25 09:10:35.153770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:48584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.194 [2024-07-25 09:10:35.153784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.194 [2024-07-25 09:10:35.153804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:48592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.194 [2024-07-25 09:10:35.153818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.194 [2024-07-25 09:10:35.153849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:48600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.194 [2024-07-25 09:10:35.153866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.194 [2024-07-25 09:10:35.153908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:48608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.194 [2024-07-25 09:10:35.153922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.194 [2024-07-25 09:10:35.153943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:48616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.194 [2024-07-25 09:10:35.153957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.194 [2024-07-25 09:10:35.153976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:48624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.194 [2024-07-25 09:10:35.153989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.194 [2024-07-25 09:10:35.154008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:48632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.194 [2024-07-25 09:10:35.154067] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.194 [2024-07-25 09:10:35.154087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:48640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.194 [2024-07-25 09:10:35.154101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.194 [2024-07-25 09:10:35.154120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:48648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.194 [2024-07-25 09:10:35.154134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.194 [2024-07-25 09:10:35.154153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:48656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.194 [2024-07-25 09:10:35.154167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.194 [2024-07-25 09:10:35.154186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:48664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.194 [2024-07-25 09:10:35.154200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.194 [2024-07-25 09:10:35.154218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:48672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.194 [2024-07-25 09:10:35.154232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.194 [2024-07-25 09:10:35.154253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:48680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.194 [2024-07-25 09:10:35.154267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.194 [2024-07-25 09:10:35.154285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:48688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.194 [2024-07-25 09:10:35.154298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.194 [2024-07-25 09:10:35.154319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:48696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.194 [2024-07-25 09:10:35.154333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.194 [2024-07-25 09:10:35.154351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:48704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.194 [2024-07-25 09:10:35.154365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.194 [2024-07-25 09:10:35.154383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:48712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.194 [2024-07-25 09:10:35.154397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.194 [2024-07-25 09:10:35.154415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:48720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.194 [2024-07-25 09:10:35.154429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.194 [2024-07-25 09:10:35.154447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:48728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.194 [2024-07-25 09:10:35.154460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.194 [2024-07-25 09:10:35.154480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:48736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.194 [2024-07-25 09:10:35.154494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.194 [2024-07-25 09:10:35.154515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:48744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.194 [2024-07-25 09:10:35.154529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.194 [2024-07-25 09:10:35.154551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:48752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.194 [2024-07-25 09:10:35.154565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.194 [2024-07-25 09:10:35.154583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:48760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.194 [2024-07-25 09:10:35.154597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.194 [2024-07-25 09:10:35.154616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:48768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.194 [2024-07-25 09:10:35.154631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.194 [2024-07-25 09:10:35.154650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:48776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.194 [2024-07-25 09:10:35.154664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.195 [2024-07-25 09:10:35.154682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:48784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.195 [2024-07-25 09:10:35.154710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.195 [2024-07-25 09:10:35.154730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:48792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.195 [2024-07-25 09:10:35.154744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.195 [2024-07-25 09:10:35.154764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:48800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.195 [2024-07-25 09:10:35.154778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.195 [2024-07-25 09:10:35.154800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:48808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.195 [2024-07-25 09:10:35.154824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.195 [2024-07-25 09:10:35.154846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:48816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.195 [2024-07-25 09:10:35.154860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.195 [2024-07-25 09:10:35.154879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:48824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.195 [2024-07-25 09:10:35.154892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.195 [2024-07-25 09:10:35.154911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:48832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.195 [2024-07-25 09:10:35.154924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.195 [2024-07-25 09:10:35.154960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:48840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.195 [2024-07-25 09:10:35.154974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.195 [2024-07-25 09:10:35.154992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:48848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.195 [2024-07-25 09:10:35.155006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.195 [2024-07-25 09:10:35.155024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:48856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.195 [2024-07-25 09:10:35.155038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.195 [2024-07-25 09:10:35.155057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:48864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.195 [2024-07-25 09:10:35.155071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.195 [2024-07-25 09:10:35.155091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:48872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.195 [2024-07-25 09:10:35.155105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:28.195 [2024-07-25 09:10:35.155123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:48880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.195 [2024-07-25 09:10:35.155137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.195 [2024-07-25 09:10:35.155155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:48888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.195 [2024-07-25 09:10:35.155168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.195 [2024-07-25 09:10:35.155187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:48896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.195 [2024-07-25 09:10:35.155201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.195 [2024-07-25 09:10:35.155221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:48904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.195 [2024-07-25 09:10:35.155236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.195 [2024-07-25 09:10:35.155254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:48912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.195 [2024-07-25 09:10:35.155268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.195 [2024-07-25 09:10:35.155286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:48920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.195 [2024-07-25 09:10:35.155301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.195 [2024-07-25 09:10:35.155319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:48928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.195 [2024-07-25 09:10:35.155333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.195 [2024-07-25 09:10:35.155354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:48936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.195 [2024-07-25 09:10:35.155368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.195 [2024-07-25 09:10:35.155386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:48944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.195 [2024-07-25 09:10:35.155399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.195 [2024-07-25 09:10:35.155418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:48952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.195 [2024-07-25 09:10:35.155432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.195 [2024-07-25 09:10:35.155450] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:48960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.195 [2024-07-25 09:10:35.155464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.195 [2024-07-25 09:10:35.155483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:48968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.195 [2024-07-25 09:10:35.155496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.195 [2024-07-25 09:10:35.155515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:48976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.195 [2024-07-25 09:10:35.155529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.195 [2024-07-25 09:10:35.155547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:48984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.195 [2024-07-25 09:10:35.155561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.195 [2024-07-25 09:10:35.155580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:48992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.195 [2024-07-25 09:10:35.155593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.195 [2024-07-25 09:10:35.155614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:49000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.195 [2024-07-25 09:10:35.155628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.195 [2024-07-25 09:10:35.155648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:49008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.196 [2024-07-25 09:10:35.155662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.196 [2024-07-25 09:10:35.155681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:49016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.196 [2024-07-25 09:10:35.155695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.196 [2024-07-25 09:10:35.155714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:49024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.196 [2024-07-25 09:10:35.155729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.196 [2024-07-25 09:10:35.155748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:49032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.196 [2024-07-25 09:10:35.155762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.196 [2024-07-25 09:10:35.155781] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:49040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.196 [2024-07-25 09:10:35.155795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.196 [2024-07-25 09:10:35.155823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:49048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.196 [2024-07-25 09:10:35.155849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.196 [2024-07-25 09:10:35.155871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:49056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.196 [2024-07-25 09:10:35.155887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.196 [2024-07-25 09:10:35.155908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:49064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.196 [2024-07-25 09:10:35.155922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.196 [2024-07-25 09:10:35.155940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:49072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.196 [2024-07-25 09:10:35.155954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.196 [2024-07-25 09:10:35.155973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:49080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.196 [2024-07-25 09:10:35.155986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.196 [2024-07-25 09:10:35.156005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:49088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.196 [2024-07-25 09:10:35.156018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.196 [2024-07-25 09:10:35.156043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:49096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.196 [2024-07-25 09:10:35.156057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.196 [2024-07-25 09:10:35.156075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:49104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.196 [2024-07-25 09:10:35.156089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.196 [2024-07-25 09:10:35.156108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:49112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.196 [2024-07-25 09:10:35.156122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.196 [2024-07-25 09:10:35.156142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:118 nsid:1 lba:49120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.196 [2024-07-25 09:10:35.156156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.196 [2024-07-25 09:10:35.156177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:49128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.196 [2024-07-25 09:10:35.156191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.196 [2024-07-25 09:10:35.156209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:49136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.196 [2024-07-25 09:10:35.156223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.196 [2024-07-25 09:10:35.156249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:49144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.196 [2024-07-25 09:10:35.156263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.196 [2024-07-25 09:10:35.156281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:49152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.196 [2024-07-25 09:10:35.156295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.196 [2024-07-25 09:10:35.156314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:49160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.196 [2024-07-25 09:10:35.156328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.196 [2024-07-25 09:10:35.156347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:49168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.196 [2024-07-25 09:10:35.156361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.196 [2024-07-25 09:10:35.156379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:49176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.196 [2024-07-25 09:10:35.156394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.196 [2024-07-25 09:10:35.156412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:49184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.196 [2024-07-25 09:10:35.156426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.196 [2024-07-25 09:10:35.156447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:49192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.196 [2024-07-25 09:10:35.156461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.196 [2024-07-25 09:10:35.156479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:49200 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.196 [2024-07-25 09:10:35.156493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.196 [2024-07-25 09:10:35.156511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:49208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.196 [2024-07-25 09:10:35.156525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.196 [2024-07-25 09:10:35.156543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:49216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.196 [2024-07-25 09:10:35.156557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.196 [2024-07-25 09:10:35.156577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:49224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.196 [2024-07-25 09:10:35.156591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.196 [2024-07-25 09:10:35.156610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:49232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.196 [2024-07-25 09:10:35.156623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.196 [2024-07-25 09:10:35.156650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:49240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.196 [2024-07-25 09:10:35.156665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.196 [2024-07-25 09:10:35.156683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:49248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.197 [2024-07-25 09:10:35.156697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.197 [2024-07-25 09:10:35.156717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:49256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.197 [2024-07-25 09:10:35.156731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.197 [2024-07-25 09:10:35.156749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:49264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.197 [2024-07-25 09:10:35.156763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.197 [2024-07-25 09:10:35.156786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:49272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.197 [2024-07-25 09:10:35.156801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.197 [2024-07-25 09:10:35.156830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:49280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:28.197 [2024-07-25 09:10:35.156847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.197 [2024-07-25 09:10:35.156867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:49288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.197 [2024-07-25 09:10:35.156881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.197 [2024-07-25 09:10:35.156899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:49296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.197 [2024-07-25 09:10:35.156923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.197 [2024-07-25 09:10:35.156943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:49320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.197 [2024-07-25 09:10:35.156957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.197 [2024-07-25 09:10:35.156976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:49328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.197 [2024-07-25 09:10:35.156990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.197 [2024-07-25 09:10:35.157014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:49336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.197 [2024-07-25 09:10:35.157027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.197 [2024-07-25 09:10:35.157046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:49344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.197 [2024-07-25 09:10:35.157060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.197 [2024-07-25 09:10:35.157078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:49352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.197 [2024-07-25 09:10:35.157092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.197 [2024-07-25 09:10:35.157110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:49360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.197 [2024-07-25 09:10:35.157124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.197 [2024-07-25 09:10:35.157142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:49368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.197 [2024-07-25 09:10:35.157156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.197 [2024-07-25 09:10:35.157174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:49376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.197 [2024-07-25 09:10:35.157188] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.197 [2024-07-25 09:10:35.157212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:49384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.197 [2024-07-25 09:10:35.157225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.197 [2024-07-25 09:10:35.157243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:49392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.197 [2024-07-25 09:10:35.157256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.197 [2024-07-25 09:10:35.157277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:49400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.197 [2024-07-25 09:10:35.157291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.197 [2024-07-25 09:10:35.157309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:49408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.197 [2024-07-25 09:10:35.157323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.197 [2024-07-25 09:10:35.157346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:49416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.197 [2024-07-25 09:10:35.157361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.197 [2024-07-25 09:10:35.157379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:49424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.197 [2024-07-25 09:10:35.157392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.197 [2024-07-25 09:10:35.157410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(5) to be set 00:28:28.197 [2024-07-25 09:10:35.157433] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:28.197 [2024-07-25 09:10:35.157449] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:28.197 [2024-07-25 09:10:35.157463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49432 len:8 PRP1 0x0 PRP2 0x0 00:28:28.197 [2024-07-25 09:10:35.157480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.197 [2024-07-25 09:10:35.157739] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b000 was disconnected and freed. reset controller. 
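The long run of NOTICE lines above is the NVMe driver draining I/O qpair 0x61500002b000 after its TCP connection went down: every queued READ/WRITE is completed manually with ABORTED - SQ DELETION status before the qpair is freed and the controller reset starts. When triaging a saved copy of a console log like this, a couple of shell one-liners summarize the flood (the build.log file name here is only a placeholder):

    # How many queued commands were aborted with SQ DELETION status
    grep -o 'ABORTED - SQ DELETION' build.log | wc -l

    # Smallest and largest LBA among the aborted commands
    grep -o 'lba:[0-9]*' build.log | cut -d: -f2 | sort -n | sed -n '1p;$p'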
00:28:28.197 [2024-07-25 09:10:35.157889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:28.197 [2024-07-25 09:10:35.157922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:28.197 [2024-07-25 09:10:35.157942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:28.197 [2024-07-25 09:10:35.157958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:28.197 [2024-07-25 09:10:35.157973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:28.197 [2024-07-25 09:10:35.157989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:28.197 [2024-07-25 09:10:35.158003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:28.197 [2024-07-25 09:10:35.158019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:28.197 [2024-07-25 09:10:35.158031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set
00:28:28.197 [2024-07-25 09:10:35.158284] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:28.197 [2024-07-25 09:10:35.158322] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
00:28:28.198 [2024-07-25 09:10:35.158458] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.198 [2024-07-25 09:10:35.158493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420
00:28:28.198 [2024-07-25 09:10:35.158510] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set
00:28:28.198 [2024-07-25 09:10:35.158549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
00:28:28.198 [2024-07-25 09:10:35.158581] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:28.198 [2024-07-25 09:10:35.158602] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:28.198 [2024-07-25 09:10:35.158619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:28.198 [2024-07-25 09:10:35.158655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
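The admin queue is torn down the same way, and the immediate reconnect is refused: connect() in uring_sock_create() returns errno 111, so controller reinitialization fails and bdev_nvme schedules another reset, which only succeeds once the listener is re-added below (host/timeout.sh@91). On Linux, errno 111 is ECONNREFUSED; a quick check on any build host:

    # errno 111 from the connect() failures above maps to ECONNREFUSED on Linux
    python3 -c 'import errno, os; print(errno.ECONNREFUSED, os.strerror(errno.ECONNREFUSED))'
    # prints: 111 Connection refused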
00:28:28.198 [2024-07-25 09:10:35.158674] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:28.198 09:10:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:28:29.146 [2024-07-25 09:10:36.172623] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.146 [2024-07-25 09:10:36.172744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420
00:28:29.146 [2024-07-25 09:10:36.172770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set
00:28:29.146 [2024-07-25 09:10:36.172816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
00:28:29.146 [2024-07-25 09:10:36.172862] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:29.146 [2024-07-25 09:10:36.172885] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:29.146 [2024-07-25 09:10:36.172902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:29.146 [2024-07-25 09:10:36.172955] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:29.146 [2024-07-25 09:10:36.172981] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:29.146 09:10:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:29.404 [2024-07-25 09:10:36.396337] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:29.404 09:10:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 88104
00:28:30.339 [2024-07-25 09:10:37.192916] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:28:38.453
00:28:38.453                                                                             Latency(us)
00:28:38.453  Device Information          : runtime(s)     IOPS     MiB/s   Fail/s   TO/s    Average       min        max
00:28:38.453  Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:28:38.453    Verification LBA range: start 0x0 length 0x4000
00:28:38.453    NVMe0n1                   :      10.02  4850.10    18.95     0.00   0.00   26342.37   3008.70 3035150.89
00:28:38.453  ===================================================================================================================
00:28:38.453    Total                     :             4850.10    18.95     0.00   0.00   26342.37   3008.70 3035150.89
00:28:38.453  0
00:28:38.453 09:10:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=88210
00:28:38.453 09:10:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:28:38.453 09:10:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:28:38.453 Running I/O for 10 seconds...
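The recovery above and the listener removal just below are both driven through the SPDK target's JSON-RPC interface (host/timeout.sh@91 and @99 in this trace). A minimal sketch of that listener toggle, reusing the NQN, address and port visible in this log; it assumes the target and the bdevperf job are already running:

    # Sketch of the listener toggle exercised by this run (paths/values taken from the log above)
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    # Remove the TCP listener: in-flight I/O gets aborted (SQ DELETION) and host
    # reconnects fail with ECONNREFUSED until the listener comes back.
    "$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    sleep 1

    # Re-add the listener: the next controller reset/reconnect succeeds, as seen
    # above ("Resetting controller successful.").
    "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420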
00:28:38.454 09:10:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:38.454 [2024-07-25 09:10:45.324904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:51672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.454 [2024-07-25 09:10:45.324997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.454 [2024-07-25 09:10:45.325031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:51680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.454 [2024-07-25 09:10:45.325048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.454 [2024-07-25 09:10:45.325075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:51688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.454 [2024-07-25 09:10:45.325090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.454 [2024-07-25 09:10:45.325106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:51696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.454 [2024-07-25 09:10:45.325119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.454 [2024-07-25 09:10:45.325135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:51704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.454 [2024-07-25 09:10:45.325149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.454 [2024-07-25 09:10:45.325165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:51712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.454 [2024-07-25 09:10:45.325178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.454 [2024-07-25 09:10:45.325194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:51720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.454 [2024-07-25 09:10:45.325223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.454 [2024-07-25 09:10:45.325272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:51728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.454 [2024-07-25 09:10:45.325286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.454 [2024-07-25 09:10:45.325303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:51736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.454 [2024-07-25 09:10:45.325316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.454 [2024-07-25 09:10:45.325332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:51744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.454 
[2024-07-25 09:10:45.325355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.454 [2024-07-25 09:10:45.325371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:51752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.454 [2024-07-25 09:10:45.325385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.454 [2024-07-25 09:10:45.325400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:51760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.454 [2024-07-25 09:10:45.325423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.454 [2024-07-25 09:10:45.325439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:51768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.454 [2024-07-25 09:10:45.325453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.454 [2024-07-25 09:10:45.325500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:51776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.454 [2024-07-25 09:10:45.325530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.454 [2024-07-25 09:10:45.325546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:51784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.454 [2024-07-25 09:10:45.325560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.454 [2024-07-25 09:10:45.325576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:51792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.454 [2024-07-25 09:10:45.325589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.454 [2024-07-25 09:10:45.325620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:51800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.454 [2024-07-25 09:10:45.325633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.454 [2024-07-25 09:10:45.325651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:51808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.454 [2024-07-25 09:10:45.325665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.454 [2024-07-25 09:10:45.325681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:51816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.454 [2024-07-25 09:10:45.325695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.454 [2024-07-25 09:10:45.325727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:51824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.454 [2024-07-25 09:10:45.325740] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.454 [2024-07-25 09:10:45.325756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:51832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.454 [2024-07-25 09:10:45.325770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.454 [2024-07-25 09:10:45.325787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:51840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.454 [2024-07-25 09:10:45.325800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.454 [2024-07-25 09:10:45.325816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:51848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.454 [2024-07-25 09:10:45.325846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.454 [2024-07-25 09:10:45.325863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:51856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.454 [2024-07-25 09:10:45.325877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.454 [2024-07-25 09:10:45.325896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:51864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.454 [2024-07-25 09:10:45.325910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.454 [2024-07-25 09:10:45.325926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:51872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.454 [2024-07-25 09:10:45.325941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.454 [2024-07-25 09:10:45.325969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:51880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.454 [2024-07-25 09:10:45.325985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.454 [2024-07-25 09:10:45.326002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:51888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.454 [2024-07-25 09:10:45.326016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.454 [2024-07-25 09:10:45.326033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:51896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.454 [2024-07-25 09:10:45.326101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.454 [2024-07-25 09:10:45.326117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:51904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.454 [2024-07-25 09:10:45.326131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.454 [2024-07-25 09:10:45.326162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:51912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.454 [2024-07-25 09:10:45.326192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.454 [2024-07-25 09:10:45.326209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:51920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.454 [2024-07-25 09:10:45.326223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.454 [2024-07-25 09:10:45.326239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:51928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.454 [2024-07-25 09:10:45.326268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.454 [2024-07-25 09:10:45.326285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:51936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.454 [2024-07-25 09:10:45.326299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.454 [2024-07-25 09:10:45.326315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:51944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.454 [2024-07-25 09:10:45.326329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.454 [2024-07-25 09:10:45.326345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:51952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.455 [2024-07-25 09:10:45.326358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.455 [2024-07-25 09:10:45.326374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:51960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.455 [2024-07-25 09:10:45.326388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.455 [2024-07-25 09:10:45.326404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:51968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.455 [2024-07-25 09:10:45.326418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.455 [2024-07-25 09:10:45.326436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:51976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.455 [2024-07-25 09:10:45.326453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.455 [2024-07-25 09:10:45.326469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:51984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.455 [2024-07-25 09:10:45.326494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:28:38.455 [2024-07-25 09:10:45.326524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:51992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.455 [2024-07-25 09:10:45.326537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.455 [2024-07-25 09:10:45.326564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:52000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.455 [2024-07-25 09:10:45.326577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.455 [2024-07-25 09:10:45.326608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:52008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.455 [2024-07-25 09:10:45.326626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.455 [2024-07-25 09:10:45.326642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:52016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.455 [2024-07-25 09:10:45.326656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.455 [2024-07-25 09:10:45.326671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:52024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.455 [2024-07-25 09:10:45.326685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.455 [2024-07-25 09:10:45.326701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:52032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.455 [2024-07-25 09:10:45.326715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.455 [2024-07-25 09:10:45.326730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:52040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.455 [2024-07-25 09:10:45.326744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.455 [2024-07-25 09:10:45.326760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:52048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.455 [2024-07-25 09:10:45.326773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.455 [2024-07-25 09:10:45.326789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:52056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.455 [2024-07-25 09:10:45.326803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.455 [2024-07-25 09:10:45.326820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:52064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.455 [2024-07-25 09:10:45.326834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.455 [2024-07-25 09:10:45.326850] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:52072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.455 [2024-07-25 09:10:45.326880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.455 [2024-07-25 09:10:45.326897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:52080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.455 [2024-07-25 09:10:45.326911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.455 [2024-07-25 09:10:45.326959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:52088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.455 [2024-07-25 09:10:45.326973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.455 [2024-07-25 09:10:45.327002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:52096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.455 [2024-07-25 09:10:45.327023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.455 [2024-07-25 09:10:45.327040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:52104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.455 [2024-07-25 09:10:45.327054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.455 [2024-07-25 09:10:45.327070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:52112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.455 [2024-07-25 09:10:45.327084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.455 [2024-07-25 09:10:45.327101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:52120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.455 [2024-07-25 09:10:45.327115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.455 [2024-07-25 09:10:45.327131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:52128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.455 [2024-07-25 09:10:45.327145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.455 [2024-07-25 09:10:45.327161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:52136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.455 [2024-07-25 09:10:45.327175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.455 [2024-07-25 09:10:45.327191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:52144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.455 [2024-07-25 09:10:45.327205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.455 [2024-07-25 09:10:45.327222] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:52152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.455 [2024-07-25 09:10:45.327236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.455 [2024-07-25 09:10:45.327252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:52160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.455 [2024-07-25 09:10:45.327295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.455 [2024-07-25 09:10:45.327326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:52168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.455 [2024-07-25 09:10:45.327356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.455 [2024-07-25 09:10:45.327389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:52176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.455 [2024-07-25 09:10:45.327403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.455 [2024-07-25 09:10:45.327418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:52184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.455 [2024-07-25 09:10:45.327432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.455 [2024-07-25 09:10:45.327449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:52192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.455 [2024-07-25 09:10:45.327463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.455 [2024-07-25 09:10:45.327479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:52200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.455 [2024-07-25 09:10:45.327493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.455 [2024-07-25 09:10:45.327509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:52208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.455 [2024-07-25 09:10:45.327523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.455 [2024-07-25 09:10:45.327539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:52216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.455 [2024-07-25 09:10:45.327552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.455 [2024-07-25 09:10:45.327568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:52224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.455 [2024-07-25 09:10:45.327582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.455 [2024-07-25 09:10:45.327597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:52232 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.455 [2024-07-25 09:10:45.327611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.456 [2024-07-25 09:10:45.327627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:52240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.456 [2024-07-25 09:10:45.327640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.456 [2024-07-25 09:10:45.327656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:52248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.456 [2024-07-25 09:10:45.327670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.456 [2024-07-25 09:10:45.327702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:52256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.456 [2024-07-25 09:10:45.327715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.456 [2024-07-25 09:10:45.327747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:52264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.456 [2024-07-25 09:10:45.327760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.456 [2024-07-25 09:10:45.327776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:52272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.456 [2024-07-25 09:10:45.327791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.456 [2024-07-25 09:10:45.327807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:52280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.456 [2024-07-25 09:10:45.327821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.456 [2024-07-25 09:10:45.327872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:52288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.456 [2024-07-25 09:10:45.327889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.456 [2024-07-25 09:10:45.327906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:52296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.456 [2024-07-25 09:10:45.327920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.456 [2024-07-25 09:10:45.327937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:52304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.456 [2024-07-25 09:10:45.327951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.456 [2024-07-25 09:10:45.327967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:52312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.456 
[2024-07-25 09:10:45.327981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.456 [2024-07-25 09:10:45.328000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:52320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.456 [2024-07-25 09:10:45.328014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.456 [2024-07-25 09:10:45.328031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:52328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.456 [2024-07-25 09:10:45.328045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.456 [2024-07-25 09:10:45.328061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:52336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.456 [2024-07-25 09:10:45.328075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.456 [2024-07-25 09:10:45.328092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:52344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.456 [2024-07-25 09:10:45.328105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.456 [2024-07-25 09:10:45.328131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:52352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.456 [2024-07-25 09:10:45.328145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.456 [2024-07-25 09:10:45.328185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:52360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.456 [2024-07-25 09:10:45.328199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.456 [2024-07-25 09:10:45.328214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:51368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.456 [2024-07-25 09:10:45.328244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.456 [2024-07-25 09:10:45.328262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:51376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.456 [2024-07-25 09:10:45.328306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.456 [2024-07-25 09:10:45.328330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:51384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.456 [2024-07-25 09:10:45.328343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.456 [2024-07-25 09:10:45.328359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:51392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.456 [2024-07-25 09:10:45.328389] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.456 [2024-07-25 09:10:45.328406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:51400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.456 [2024-07-25 09:10:45.328420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.456 [2024-07-25 09:10:45.328437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:51408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.456 [2024-07-25 09:10:45.328473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.456 [2024-07-25 09:10:45.328490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:51416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.456 [2024-07-25 09:10:45.328504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.456 [2024-07-25 09:10:45.328521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:51424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.456 [2024-07-25 09:10:45.328535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.456 [2024-07-25 09:10:45.328551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:51432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.456 [2024-07-25 09:10:45.328565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.456 [2024-07-25 09:10:45.328582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:51440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.456 [2024-07-25 09:10:45.328596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.456 [2024-07-25 09:10:45.328613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:51448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.456 [2024-07-25 09:10:45.328628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.456 [2024-07-25 09:10:45.328645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:51456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.456 [2024-07-25 09:10:45.328688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.456 [2024-07-25 09:10:45.328736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:51464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.456 [2024-07-25 09:10:45.328749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.456 [2024-07-25 09:10:45.328781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:51472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.456 [2024-07-25 09:10:45.328796] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.456 [2024-07-25 09:10:45.328812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:51480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.456 [2024-07-25 09:10:45.328825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.456 [2024-07-25 09:10:45.328852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:52368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.456 [2024-07-25 09:10:45.328876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.456 [2024-07-25 09:10:45.328892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:52376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.456 [2024-07-25 09:10:45.328906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.456 [2024-07-25 09:10:45.328933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:51488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.456 [2024-07-25 09:10:45.328947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.456 [2024-07-25 09:10:45.328973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:51496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.456 [2024-07-25 09:10:45.328988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.456 [2024-07-25 09:10:45.329004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:51504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.456 [2024-07-25 09:10:45.329018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.457 [2024-07-25 09:10:45.329034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:51512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.457 [2024-07-25 09:10:45.329048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.457 [2024-07-25 09:10:45.329064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:51520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.457 [2024-07-25 09:10:45.329077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.457 [2024-07-25 09:10:45.329115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:51528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.457 [2024-07-25 09:10:45.329129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.457 [2024-07-25 09:10:45.329161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:51536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.457 [2024-07-25 09:10:45.329175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.457 [2024-07-25 09:10:45.329192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:52384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:38.457 [2024-07-25 09:10:45.329205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.457 [2024-07-25 09:10:45.329221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:51544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.457 [2024-07-25 09:10:45.329235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.457 [2024-07-25 09:10:45.329252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:51552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.457 [2024-07-25 09:10:45.329266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.457 [2024-07-25 09:10:45.329289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:51560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.457 [2024-07-25 09:10:45.329302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.457 [2024-07-25 09:10:45.329319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:51568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.457 [2024-07-25 09:10:45.329332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.457 [2024-07-25 09:10:45.329348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:51576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.457 [2024-07-25 09:10:45.329362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.457 [2024-07-25 09:10:45.329378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.457 [2024-07-25 09:10:45.329392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.457 [2024-07-25 09:10:45.329408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:51592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.457 [2024-07-25 09:10:45.329421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.457 [2024-07-25 09:10:45.329437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:51600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.457 [2024-07-25 09:10:45.329451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.457 [2024-07-25 09:10:45.329467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:51608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.457 [2024-07-25 09:10:45.329480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:28:38.457 [2024-07-25 09:10:45.329496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:51616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.457 [2024-07-25 09:10:45.329509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.457 [2024-07-25 09:10:45.329526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:51624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.457 [2024-07-25 09:10:45.329539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.457 [2024-07-25 09:10:45.329555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:51632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.457 [2024-07-25 09:10:45.329569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.457 [2024-07-25 09:10:45.329584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:51640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.457 [2024-07-25 09:10:45.329598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.457 [2024-07-25 09:10:45.329623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:51648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.457 [2024-07-25 09:10:45.329638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.457 [2024-07-25 09:10:45.329655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:51656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.457 [2024-07-25 09:10:45.329668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.457 [2024-07-25 09:10:45.329683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b780 is same with the state(5) to be set 00:28:38.457 [2024-07-25 09:10:45.329702] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:38.457 [2024-07-25 09:10:45.329714] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.457 [2024-07-25 09:10:45.329728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51664 len:8 PRP1 0x0 PRP2 0x0 00:28:38.457 [2024-07-25 09:10:45.329742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.457 [2024-07-25 09:10:45.330044] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b780 was disconnected and freed. reset controller. 
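Every *NOTICE* pair in the dump above is one outstanding I/O that was aborted with ABORTED - SQ DELETION as the queue pair was torn down. When triaging a saved copy of such a console log, a quick tally of aborted reads versus writes is usually enough to see the shape of the failure; a convenience one-liner for that (the log file name below is illustrative, not part of the test):

# Count aborted READ vs. WRITE submissions in a saved copy of this console log.
grep -Eo 'nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE)' console.log | sort | uniq -c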
00:28:38.457 [2024-07-25 09:10:45.330337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.457 [2024-07-25 09:10:45.330448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:28:38.457 [2024-07-25 09:10:45.330618] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.457 [2024-07-25 09:10:45.330648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:28:38.457 [2024-07-25 09:10:45.330664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:28:38.457 [2024-07-25 09:10:45.330691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:28:38.457 [2024-07-25 09:10:45.330716] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:38.457 [2024-07-25 09:10:45.330730] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:38.457 [2024-07-25 09:10:45.330746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:38.457 [2024-07-25 09:10:45.330825] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:38.457 [2024-07-25 09:10:45.330841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:38.457 09:10:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:28:39.392 [2024-07-25 09:10:46.331050] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.392 [2024-07-25 09:10:46.331143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:28:39.392 [2024-07-25 09:10:46.331167] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:28:39.392 [2024-07-25 09:10:46.331206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:28:39.392 [2024-07-25 09:10:46.331236] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.392 [2024-07-25 09:10:46.331252] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.392 [2024-07-25 09:10:46.331268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.392 [2024-07-25 09:10:46.331308] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.392 [2024-07-25 09:10:46.331327] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.328 [2024-07-25 09:10:47.331513] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.328 [2024-07-25 09:10:47.331613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:28:40.328 [2024-07-25 09:10:47.331637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:28:40.328 [2024-07-25 09:10:47.331678] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:28:40.328 [2024-07-25 09:10:47.331708] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.328 [2024-07-25 09:10:47.331723] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.328 [2024-07-25 09:10:47.331740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.328 [2024-07-25 09:10:47.331779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.328 [2024-07-25 09:10:47.331797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.276 [2024-07-25 09:10:48.335530] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.276 [2024-07-25 09:10:48.335673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:28:41.276 [2024-07-25 09:10:48.335695] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:28:41.276 [2024-07-25 09:10:48.335998] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:28:41.276 [2024-07-25 09:10:48.336301] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.276 [2024-07-25 09:10:48.336321] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.276 [2024-07-25 09:10:48.336337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.276 [2024-07-25 09:10:48.340661] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.276 [2024-07-25 09:10:48.340712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.276 09:10:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:41.543 [2024-07-25 09:10:48.623105] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:41.543 09:10:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 88210 00:28:42.478 [2024-07-25 09:10:49.390028] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
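The recovery traced at host/timeout.sh@102 above comes down to a single RPC: the host has been retrying the connection roughly once per second and failing with errno 111 (connection refused), and as soon as the TCP listener is announced again it reconnects and the pending controller reset completes ("Resetting controller successful"). A minimal sketch of that step, assuming the same subsystem NQN and target address as this run:

# Re-announce the listener that the timeout test removed earlier; the script path
# and parameters are copied from the rpc.py call traced above.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
    nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420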
00:28:47.753
00:28:47.753 Latency(us)
00:28:47.753 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:47.753 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:28:47.753 Verification LBA range: start 0x0 length 0x4000
00:28:47.753 NVMe0n1 : 10.01 4155.20 16.23 3464.55 0.00 16760.90 830.37 3019898.88
00:28:47.753 ===================================================================================================================
00:28:47.753 Total : 4155.20 16.23 3464.55 0.00 16760.90 0.00 3019898.88
00:28:47.753 0
00:28:47.753 09:10:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 88080
00:28:47.753 09:10:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 88080 ']'
00:28:47.753 09:10:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 88080
00:28:47.753 09:10:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname
00:28:47.753 09:10:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:47.753 09:10:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88080
00:28:47.753 killing process with pid 88080
Received shutdown signal, test time was about 10.000000 seconds
00:28:47.753
00:28:47.753 Latency(us)
00:28:47.753 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:47.753 ===================================================================================================================
00:28:47.753 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:47.753 09:10:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:28:47.753 09:10:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:28:47.753 09:10:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88080'
00:28:47.753 09:10:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 88080
00:28:47.753 09:10:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 88080
00:28:48.318 09:10:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=88326
00:28:48.318 09:10:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:28:48.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:28:48.318 09:10:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 88326 /var/tmp/bdevperf.sock
00:28:48.318 09:10:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 88326 ']'
00:28:48.318 09:10:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:28:48.318 09:10:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100
00:28:48.318 09:10:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:28:48.318 09:10:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:48.318 09:10:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:48.318 [2024-07-25 09:10:55.416223] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:48.318 [2024-07-25 09:10:55.416424] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88326 ] 00:28:48.576 [2024-07-25 09:10:55.576723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.833 [2024-07-25 09:10:55.810166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:49.091 [2024-07-25 09:10:56.013318] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:28:49.349 09:10:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:49.349 09:10:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:28:49.349 09:10:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88326 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:28:49.349 09:10:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=88342 00:28:49.349 09:10:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:28:49.607 09:10:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:28:49.865 NVMe0n1 00:28:49.865 09:10:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=88382 00:28:49.865 09:10:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:49.865 09:10:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:28:50.123 Running I/O for 10 seconds... 
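Stripped of the shell tracing, the setup for this second bdevperf pass is four commands. The sketch below is a standalone approximation: the bpftrace attach (scripts/bpftrace.sh with scripts/bpf/nvmf_timeout.bt) is omitted and the waitforlisten helper is replaced by a simple poll for the RPC socket, while every path, flag, and RPC parameter is copied verbatim from the calls traced above:

#!/usr/bin/env bash
# Sketch of the second bdevperf pass from this run, minus the bpftrace instrumentation.
SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/bdevperf.sock

# Start bdevperf idle (-z) on core mask 0x4: 128 outstanding 4096-byte random reads
# for 10 seconds once the workload is triggered over the RPC socket.
"$SPDK/build/examples/bdevperf" -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w randread -t 10 -f &

# Crude stand-in for the waitforlisten helper: wait for the RPC socket to appear.
while [ ! -S "$SOCK" ]; do sleep 0.2; done

# Same bdev_nvme options and attach parameters as host/timeout.sh@118 and @120 above:
# a 5 s controller-loss timeout and a 2 s reconnect delay against 10.0.0.2:4420.
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_set_options -r -1 -e 9
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

# Trigger the workload that was declared on the bdevperf command line.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests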
00:28:51.118 09:10:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:51.118 [2024-07-25 09:10:58.121132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.118 [2024-07-25 09:10:58.121202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.118 [2024-07-25 09:10:58.121226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.118 [2024-07-25 09:10:58.121245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.118 [2024-07-25 09:10:58.121260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.118 [2024-07-25 09:10:58.121278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.118 [2024-07-25 09:10:58.121293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:51.118 [2024-07-25 09:10:58.121309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.118 [2024-07-25 09:10:58.121323] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:28:51.118 [2024-07-25 09:10:58.121634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:58152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.118 [2024-07-25 09:10:58.121660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.118 [2024-07-25 09:10:58.121699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:108744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.119 [2024-07-25 09:10:58.121716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.119 [2024-07-25 09:10:58.121736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:78104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.119 [2024-07-25 09:10:58.121751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.119 [2024-07-25 09:10:58.121769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:120896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.119 [2024-07-25 09:10:58.121784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.119 [2024-07-25 09:10:58.121806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:109720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.119 [2024-07-25 09:10:58.121840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.119 
[2024-07-25 09:10:58.121863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:33048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.119 [2024-07-25 09:10:58.121878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.119 [2024-07-25 09:10:58.121897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:11472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.119 [2024-07-25 09:10:58.121912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.119 [2024-07-25 09:10:58.121939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:19784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.119 [2024-07-25 09:10:58.121953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.119 [2024-07-25 09:10:58.121981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:71608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.119 [2024-07-25 09:10:58.122001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.119 [2024-07-25 09:10:58.122023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.119 [2024-07-25 09:10:58.122037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.119 [2024-07-25 09:10:58.122056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:125600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.119 [2024-07-25 09:10:58.122071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.119 [2024-07-25 09:10:58.122089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.119 [2024-07-25 09:10:58.122103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.119 [2024-07-25 09:10:58.122122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:31424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.119 [2024-07-25 09:10:58.122138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.119 [2024-07-25 09:10:58.122159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:91784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.119 [2024-07-25 09:10:58.122173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.119 [2024-07-25 09:10:58.122192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:36232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.119 [2024-07-25 09:10:58.122207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.119 [2024-07-25 09:10:58.122226] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:88464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.119 [2024-07-25 09:10:58.122240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.119 [2024-07-25 09:10:58.122259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:92520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.119 [2024-07-25 09:10:58.122274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.119 [2024-07-25 09:10:58.122297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:52584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.119 [2024-07-25 09:10:58.122312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.119 [2024-07-25 09:10:58.122340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:32264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.119 [2024-07-25 09:10:58.122358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.119 [2024-07-25 09:10:58.122381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:106520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.119 [2024-07-25 09:10:58.122395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.119 [2024-07-25 09:10:58.122414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:75008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.119 [2024-07-25 09:10:58.122429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.119 [2024-07-25 09:10:58.122452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:109264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.119 [2024-07-25 09:10:58.122467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.119 [2024-07-25 09:10:58.122486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:129576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.119 [2024-07-25 09:10:58.122500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.119 [2024-07-25 09:10:58.122520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:46656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.119 [2024-07-25 09:10:58.122534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.119 [2024-07-25 09:10:58.122553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:44104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.119 [2024-07-25 09:10:58.122567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.119 [2024-07-25 09:10:58.122602] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:26448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.119 [2024-07-25 09:10:58.122617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.119 [2024-07-25 09:10:58.122636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:91512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.119 [2024-07-25 09:10:58.122650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.119 [2024-07-25 09:10:58.122669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:25448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.119 [2024-07-25 09:10:58.122684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.119 [2024-07-25 09:10:58.122703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:109456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.119 [2024-07-25 09:10:58.122717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.119 [2024-07-25 09:10:58.122737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:3352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.119 [2024-07-25 09:10:58.122752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.119 [2024-07-25 09:10:58.122771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:80096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.119 [2024-07-25 09:10:58.122786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.119 [2024-07-25 09:10:58.122806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:30840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.119 [2024-07-25 09:10:58.122832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.119 [2024-07-25 09:10:58.122853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:68752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.119 [2024-07-25 09:10:58.122868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.119 [2024-07-25 09:10:58.122890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:79816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.119 [2024-07-25 09:10:58.122905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.119 [2024-07-25 09:10:58.122924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:94976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.119 [2024-07-25 09:10:58.122939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.119 [2024-07-25 09:10:58.122968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:91 nsid:1 lba:65480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.119 [2024-07-25 09:10:58.122986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.119 [2024-07-25 09:10:58.123005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:42632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.119 [2024-07-25 09:10:58.123020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.120 [2024-07-25 09:10:58.123039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:58488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.120 [2024-07-25 09:10:58.123054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.120 [2024-07-25 09:10:58.123074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:26568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.120 [2024-07-25 09:10:58.123088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.120 [2024-07-25 09:10:58.123108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:109904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.120 [2024-07-25 09:10:58.123123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.120 [2024-07-25 09:10:58.123142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:85576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.120 [2024-07-25 09:10:58.123156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.120 [2024-07-25 09:10:58.123178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:123368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.120 [2024-07-25 09:10:58.123192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.120 [2024-07-25 09:10:58.123211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:37064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.120 [2024-07-25 09:10:58.123226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.120 [2024-07-25 09:10:58.123246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:11880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.120 [2024-07-25 09:10:58.123260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.120 [2024-07-25 09:10:58.123299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:34312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.120 [2024-07-25 09:10:58.123314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.120 [2024-07-25 09:10:58.123351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:57376 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.120 [2024-07-25 09:10:58.123370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.120 [2024-07-25 09:10:58.123391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:15616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.120 [2024-07-25 09:10:58.123405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.120 [2024-07-25 09:10:58.123424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:23024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.120 [2024-07-25 09:10:58.123439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.120 [2024-07-25 09:10:58.123458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:109976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.120 [2024-07-25 09:10:58.123472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.120 [2024-07-25 09:10:58.123499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:57552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.120 [2024-07-25 09:10:58.123514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.120 [2024-07-25 09:10:58.123533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:106744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.120 [2024-07-25 09:10:58.123547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.120 [2024-07-25 09:10:58.123566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:16472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.120 [2024-07-25 09:10:58.123586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.120 [2024-07-25 09:10:58.123605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:119344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.120 [2024-07-25 09:10:58.123620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.120 [2024-07-25 09:10:58.123639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:26400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.120 [2024-07-25 09:10:58.123653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.120 [2024-07-25 09:10:58.123673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:83632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.120 [2024-07-25 09:10:58.123687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.120 [2024-07-25 09:10:58.123707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:88952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:51.120 [2024-07-25 09:10:58.123721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.120 [2024-07-25 09:10:58.123743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:48800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.120 [2024-07-25 09:10:58.123758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.120 [2024-07-25 09:10:58.123781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:34736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.120 [2024-07-25 09:10:58.123796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.120 [2024-07-25 09:10:58.123837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:73120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.120 [2024-07-25 09:10:58.123855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.120 [2024-07-25 09:10:58.123875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:114360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.120 [2024-07-25 09:10:58.123889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.120 [2024-07-25 09:10:58.123908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:92968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.120 [2024-07-25 09:10:58.123923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.120 [2024-07-25 09:10:58.123942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.120 [2024-07-25 09:10:58.123957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.120 [2024-07-25 09:10:58.123976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:125672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.120 [2024-07-25 09:10:58.123990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.120 [2024-07-25 09:10:58.124009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:67328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.120 [2024-07-25 09:10:58.124031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.120 [2024-07-25 09:10:58.124050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:77192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.120 [2024-07-25 09:10:58.124064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.120 [2024-07-25 09:10:58.124090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:106904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.120 [2024-07-25 09:10:58.124107] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.120 [2024-07-25 09:10:58.124126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:96504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.120 [2024-07-25 09:10:58.124141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.120 [2024-07-25 09:10:58.124160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:76464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.120 [2024-07-25 09:10:58.124174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.120 [2024-07-25 09:10:58.124193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:92416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.120 [2024-07-25 09:10:58.124207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.120 [2024-07-25 09:10:58.124226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:19392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.120 [2024-07-25 09:10:58.124240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.120 [2024-07-25 09:10:58.124261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:34776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.120 [2024-07-25 09:10:58.124275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.120 [2024-07-25 09:10:58.124295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:117168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.120 [2024-07-25 09:10:58.124310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.120 [2024-07-25 09:10:58.124328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:81512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.121 [2024-07-25 09:10:58.124343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.121 [2024-07-25 09:10:58.124364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:29096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.121 [2024-07-25 09:10:58.124379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.121 [2024-07-25 09:10:58.124397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:72968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.121 [2024-07-25 09:10:58.124412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.121 [2024-07-25 09:10:58.124431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:56672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.121 [2024-07-25 09:10:58.124445] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.121 [2024-07-25 09:10:58.124465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.121 [2024-07-25 09:10:58.124480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.121 [2024-07-25 09:10:58.124498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:94104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.121 [2024-07-25 09:10:58.124513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.121 [2024-07-25 09:10:58.124532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:119696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.121 [2024-07-25 09:10:58.124546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.121 [2024-07-25 09:10:58.124565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.121 [2024-07-25 09:10:58.124579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.121 [2024-07-25 09:10:58.124598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:101728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.121 [2024-07-25 09:10:58.124612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.121 [2024-07-25 09:10:58.124634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:103184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.121 [2024-07-25 09:10:58.124648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.121 [2024-07-25 09:10:58.124672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:44032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.121 [2024-07-25 09:10:58.124687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.121 [2024-07-25 09:10:58.124707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:116712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.121 [2024-07-25 09:10:58.124721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.121 [2024-07-25 09:10:58.124741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:52224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.121 [2024-07-25 09:10:58.124755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.121 [2024-07-25 09:10:58.124774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.121 [2024-07-25 09:10:58.124789] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.121 [2024-07-25 09:10:58.124808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:15432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.121 [2024-07-25 09:10:58.124834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.121 [2024-07-25 09:10:58.124855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:57600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.121 [2024-07-25 09:10:58.124870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.121 [2024-07-25 09:10:58.124889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:52880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.121 [2024-07-25 09:10:58.124904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.121 [2024-07-25 09:10:58.124935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:104672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.121 [2024-07-25 09:10:58.124949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.121 [2024-07-25 09:10:58.124974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:40952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.121 [2024-07-25 09:10:58.124988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.121 [2024-07-25 09:10:58.125007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:127840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.121 [2024-07-25 09:10:58.125022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.121 [2024-07-25 09:10:58.125041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:72024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.121 [2024-07-25 09:10:58.125056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.121 [2024-07-25 09:10:58.125075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:65024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.121 [2024-07-25 09:10:58.125089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.121 [2024-07-25 09:10:58.125108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:26632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.121 [2024-07-25 09:10:58.125123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.121 [2024-07-25 09:10:58.125148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.121 [2024-07-25 09:10:58.125163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.121 [2024-07-25 09:10:58.125181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.121 [2024-07-25 09:10:58.125196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.121 [2024-07-25 09:10:58.125219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:60992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.121 [2024-07-25 09:10:58.125234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.121 [2024-07-25 09:10:58.125253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.121 [2024-07-25 09:10:58.125268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.121 [2024-07-25 09:10:58.125286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:123120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.121 [2024-07-25 09:10:58.125301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.121 [2024-07-25 09:10:58.125331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:130080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.121 [2024-07-25 09:10:58.125352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.121 [2024-07-25 09:10:58.125389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.121 [2024-07-25 09:10:58.125403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.121 [2024-07-25 09:10:58.125431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:21760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.121 [2024-07-25 09:10:58.125445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.121 [2024-07-25 09:10:58.125465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:43624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.121 [2024-07-25 09:10:58.125479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.121 [2024-07-25 09:10:58.125498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:77856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.121 [2024-07-25 09:10:58.125513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.121 [2024-07-25 09:10:58.125535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:128592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.121 [2024-07-25 09:10:58.125549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:28:51.121 [2024-07-25 09:10:58.125569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:104288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.121 [2024-07-25 09:10:58.125590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.121 [2024-07-25 09:10:58.125611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.121 [2024-07-25 09:10:58.125625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.121 [2024-07-25 09:10:58.125659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:124008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.122 [2024-07-25 09:10:58.125674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.122 [2024-07-25 09:10:58.125695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:48352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.122 [2024-07-25 09:10:58.125710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.122 [2024-07-25 09:10:58.125730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:80192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.122 [2024-07-25 09:10:58.125745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.122 [2024-07-25 09:10:58.125764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:69048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.122 [2024-07-25 09:10:58.125779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.122 [2024-07-25 09:10:58.125798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:102592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.122 [2024-07-25 09:10:58.125823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.122 [2024-07-25 09:10:58.125848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:93224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.122 [2024-07-25 09:10:58.125863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.122 [2024-07-25 09:10:58.125882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:116512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.122 [2024-07-25 09:10:58.125897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.122 [2024-07-25 09:10:58.125916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:130768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.122 [2024-07-25 09:10:58.125930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.122 [2024-07-25 
09:10:58.125949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:88680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.122 [2024-07-25 09:10:58.125963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.122 [2024-07-25 09:10:58.125983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:48976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.122 [2024-07-25 09:10:58.125997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.122 [2024-07-25 09:10:58.126016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.122 [2024-07-25 09:10:58.126030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.122 [2024-07-25 09:10:58.126050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:85736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.122 [2024-07-25 09:10:58.126064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.122 [2024-07-25 09:10:58.126083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:42008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.122 [2024-07-25 09:10:58.126098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.122 [2024-07-25 09:10:58.126119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:107248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.122 [2024-07-25 09:10:58.126134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.122 [2024-07-25 09:10:58.126155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.122 [2024-07-25 09:10:58.126169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.122 [2024-07-25 09:10:58.126188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:67928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.122 [2024-07-25 09:10:58.126203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.122 [2024-07-25 09:10:58.126223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:48680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.122 [2024-07-25 09:10:58.126238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.122 [2024-07-25 09:10:58.126257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:50488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.122 [2024-07-25 09:10:58.126272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.122 [2024-07-25 09:10:58.126297] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.122 [2024-07-25 09:10:58.126312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.122 [2024-07-25 09:10:58.126330] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(5) to be set 00:28:51.122 [2024-07-25 09:10:58.126358] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:51.122 [2024-07-25 09:10:58.126375] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:51.122 [2024-07-25 09:10:58.126394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33568 len:8 PRP1 0x0 PRP2 0x0 00:28:51.122 [2024-07-25 09:10:58.126414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.122 [2024-07-25 09:10:58.126680] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b000 was disconnected and freed. reset controller. 00:28:51.122 [2024-07-25 09:10:58.127032] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.122 [2024-07-25 09:10:58.127084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:28:51.122 [2024-07-25 09:10:58.127225] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.122 [2024-07-25 09:10:58.127267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:28:51.122 [2024-07-25 09:10:58.127287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:28:51.122 [2024-07-25 09:10:58.127326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:28:51.122 [2024-07-25 09:10:58.127364] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.122 [2024-07-25 09:10:58.127384] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.122 [2024-07-25 09:10:58.127401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.122 [2024-07-25 09:10:58.127437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
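(The long run of paired *NOTICE* entries above records one event: every READ still queued on qid:1 was completed with ABORTED - SQ DELETION (00/08) when qpair 0x61500002b000 was disconnected and freed ahead of the controller reset, per the nvme_qpair_abort_queued_reqs and bdev_nvme_disconnected_qpair_cb messages at the end of the run. A minimal shell sketch for summarizing such a flood from a saved copy of this console output; the build.log file name is an assumption, and the match strings are taken verbatim from the entries above:

    # build.log stands in for a saved copy of this console output.
    grep -o 'ABORTED - SQ DELETION (00/08)' build.log | wc -l      # total aborted completions
    grep -o 'READ sqid:1 cid:[0-9]*' build.log | sort -u | wc -l   # distinct READ cids aborted on qid 1
)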
00:28:51.122 [2024-07-25 09:10:58.127456] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.122 09:10:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 88382 00:28:53.020 [2024-07-25 09:11:00.127720] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.020 [2024-07-25 09:11:00.127877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:28:53.020 [2024-07-25 09:11:00.127905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:28:53.020 [2024-07-25 09:11:00.127950] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:28:53.020 [2024-07-25 09:11:00.127996] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:53.021 [2024-07-25 09:11:00.128028] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:53.021 [2024-07-25 09:11:00.128044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:53.021 [2024-07-25 09:11:00.128091] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:53.021 [2024-07-25 09:11:00.128110] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.552 [2024-07-25 09:11:02.128317] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.552 [2024-07-25 09:11:02.128402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:28:55.552 [2024-07-25 09:11:02.128432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:28:55.552 [2024-07-25 09:11:02.128475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:28:55.552 [2024-07-25 09:11:02.128505] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.552 [2024-07-25 09:11:02.128522] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.552 [2024-07-25 09:11:02.128539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.552 [2024-07-25 09:11:02.128599] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:55.552 [2024-07-25 09:11:02.128619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.454 [2024-07-25 09:11:04.128733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
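(The reconnect attempts above land roughly two seconds apart — 09:10:58, 09:11:00, 09:11:02, and finally 09:11:04, whose remaining error output continues below — which is what the trace dump and the host/timeout.sh@132 grep check after the latency summary are verifying. A simplified restatement of that pass/fail logic, assuming the trace.txt path printed below and plain exit codes in place of the script's real failure handling:

    # Sketch only: the real check lives in host/timeout.sh (see the @132 lines below).
    trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
    delays=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace")
    if (( delays <= 2 )); then
        exit 1    # the test expects at least three recorded reconnect delays
    fi
)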
00:28:57.454 [2024-07-25 09:11:04.128834] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.454 [2024-07-25 09:11:04.128858] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.454 [2024-07-25 09:11:04.128874] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:28:57.454 [2024-07-25 09:11:04.128925] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.023 00:28:58.023 Latency(us) 00:28:58.023 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:58.023 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:28:58.023 NVMe0n1 : 8.12 1580.18 6.17 15.77 0.00 80106.02 9770.82 7046430.72 00:28:58.023 =================================================================================================================== 00:28:58.023 Total : 1580.18 6.17 15.77 0.00 80106.02 9770.82 7046430.72 00:28:58.023 0 00:28:58.282 09:11:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:58.282 Attaching 5 probes... 00:28:58.282 1251.868879: reset bdev controller NVMe0 00:28:58.282 1251.987342: reconnect bdev controller NVMe0 00:28:58.282 3252.370662: reconnect delay bdev controller NVMe0 00:28:58.282 3252.396461: reconnect bdev controller NVMe0 00:28:58.282 5253.017555: reconnect delay bdev controller NVMe0 00:28:58.282 5253.041872: reconnect bdev controller NVMe0 00:28:58.282 7253.526749: reconnect delay bdev controller NVMe0 00:28:58.282 7253.552616: reconnect bdev controller NVMe0 00:28:58.282 09:11:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:28:58.282 09:11:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:28:58.283 09:11:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 88342 00:28:58.283 09:11:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:58.283 09:11:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 88326 00:28:58.283 09:11:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 88326 ']' 00:28:58.283 09:11:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 88326 00:28:58.283 09:11:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:28:58.283 09:11:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:58.283 09:11:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88326 00:28:58.283 killing process with pid 88326 00:28:58.283 Received shutdown signal, test time was about 8.175109 seconds 00:28:58.283 00:28:58.283 Latency(us) 00:28:58.283 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:58.283 =================================================================================================================== 00:28:58.283 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:58.283 09:11:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:28:58.283 09:11:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:28:58.283 09:11:05 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88326' 00:28:58.283 09:11:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 88326 00:28:58.283 09:11:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 88326 00:28:59.660 09:11:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:59.660 09:11:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:28:59.660 09:11:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:28:59.660 09:11:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:59.660 09:11:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:28:59.660 09:11:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:59.660 09:11:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:28:59.660 09:11:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:59.660 09:11:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:59.660 rmmod nvme_tcp 00:28:59.660 rmmod nvme_fabrics 00:28:59.660 rmmod nvme_keyring 00:28:59.660 09:11:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:59.660 09:11:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set -e 00:28:59.660 09:11:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:28:59.660 09:11:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 87875 ']' 00:28:59.660 09:11:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 87875 00:28:59.660 09:11:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 87875 ']' 00:28:59.660 09:11:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 87875 00:28:59.660 09:11:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:28:59.660 09:11:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:59.660 09:11:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87875 00:28:59.660 killing process with pid 87875 00:28:59.660 09:11:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:59.660 09:11:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:59.660 09:11:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87875' 00:28:59.660 09:11:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 87875 00:28:59.660 09:11:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 87875 00:29:01.037 09:11:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:01.037 09:11:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:01.037 09:11:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:01.037 09:11:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:01.037 09:11:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:01.037 09:11:08 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:01.037 09:11:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:01.037 09:11:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:01.037 09:11:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:01.037 00:29:01.037 real 0m51.097s 00:29:01.037 user 2m28.336s 00:29:01.037 sys 0m5.791s 00:29:01.037 09:11:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:01.037 ************************************ 00:29:01.037 END TEST nvmf_timeout 00:29:01.037 09:11:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:01.037 ************************************ 00:29:01.296 09:11:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:29:01.296 09:11:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:29:01.296 00:29:01.296 real 6m29.317s 00:29:01.296 user 17m57.441s 00:29:01.296 sys 1m18.573s 00:29:01.296 09:11:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:01.296 09:11:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.296 ************************************ 00:29:01.296 END TEST nvmf_host 00:29:01.296 ************************************ 00:29:01.296 ************************************ 00:29:01.296 END TEST nvmf_tcp 00:29:01.296 ************************************ 00:29:01.296 00:29:01.296 real 16m36.374s 00:29:01.296 user 43m33.203s 00:29:01.296 sys 4m0.355s 00:29:01.296 09:11:08 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:01.296 09:11:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:01.296 09:11:08 -- spdk/autotest.sh@292 -- # [[ 1 -eq 0 ]] 00:29:01.296 09:11:08 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:29:01.296 09:11:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:01.296 09:11:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:01.296 09:11:08 -- common/autotest_common.sh@10 -- # set +x 00:29:01.296 ************************************ 00:29:01.296 START TEST nvmf_dif 00:29:01.296 ************************************ 00:29:01.296 09:11:08 nvmf_dif -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:29:01.296 * Looking for test storage... 
00:29:01.296 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:01.296 09:11:08 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:01.296 09:11:08 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:01.296 09:11:08 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:01.296 09:11:08 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:01.296 09:11:08 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.296 09:11:08 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.296 09:11:08 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.296 09:11:08 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:29:01.296 09:11:08 nvmf_dif -- paths/export.sh@6 
-- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:01.296 09:11:08 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:29:01.296 09:11:08 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:29:01.296 09:11:08 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:29:01.296 09:11:08 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:29:01.296 09:11:08 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:01.296 09:11:08 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:01.296 09:11:08 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:01.296 09:11:08 
nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:01.296 09:11:08 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:01.297 09:11:08 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:01.297 09:11:08 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:01.297 Cannot find device "nvmf_tgt_br" 00:29:01.297 09:11:08 nvmf_dif -- nvmf/common.sh@155 -- # true 00:29:01.297 09:11:08 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:01.297 Cannot find device "nvmf_tgt_br2" 00:29:01.297 09:11:08 nvmf_dif -- nvmf/common.sh@156 -- # true 00:29:01.297 09:11:08 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:01.555 09:11:08 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:01.555 Cannot find device "nvmf_tgt_br" 00:29:01.555 09:11:08 nvmf_dif -- nvmf/common.sh@158 -- # true 00:29:01.555 09:11:08 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:29:01.555 Cannot find device "nvmf_tgt_br2" 00:29:01.555 09:11:08 nvmf_dif -- nvmf/common.sh@159 -- # true 00:29:01.555 09:11:08 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:01.555 09:11:08 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:01.556 09:11:08 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:01.556 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:01.556 09:11:08 nvmf_dif -- nvmf/common.sh@162 -- # true 00:29:01.556 09:11:08 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:01.556 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:01.556 09:11:08 nvmf_dif -- nvmf/common.sh@163 -- # true 00:29:01.556 09:11:08 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:01.556 09:11:08 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:01.556 09:11:08 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:01.556 09:11:08 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:01.556 09:11:08 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:01.556 09:11:08 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:01.556 09:11:08 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:01.556 09:11:08 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:01.556 09:11:08 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:01.556 09:11:08 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:01.556 09:11:08 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:01.556 09:11:08 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:01.556 09:11:08 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:01.556 09:11:08 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:01.556 09:11:08 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:01.556 09:11:08 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:01.556 
09:11:08 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:01.556 09:11:08 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:01.556 09:11:08 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:01.556 09:11:08 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:01.556 09:11:08 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:01.814 09:11:08 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:01.814 09:11:08 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:01.814 09:11:08 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:01.814 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:01.814 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:29:01.814 00:29:01.814 --- 10.0.0.2 ping statistics --- 00:29:01.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:01.814 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:29:01.814 09:11:08 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:01.814 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:01.814 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:29:01.814 00:29:01.814 --- 10.0.0.3 ping statistics --- 00:29:01.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:01.814 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:29:01.814 09:11:08 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:01.814 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:01.814 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:29:01.814 00:29:01.814 --- 10.0.0.1 ping statistics --- 00:29:01.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:01.814 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:29:01.814 09:11:08 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:01.814 09:11:08 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:29:01.814 09:11:08 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:29:01.814 09:11:08 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:02.073 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:02.073 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:02.073 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:02.073 09:11:09 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:02.073 09:11:09 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:02.073 09:11:09 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:02.073 09:11:09 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:02.073 09:11:09 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:02.073 09:11:09 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:02.073 09:11:09 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:29:02.073 09:11:09 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:29:02.073 09:11:09 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:02.073 09:11:09 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:02.073 09:11:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:02.073 09:11:09 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=88837 00:29:02.073 
09:11:09 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:29:02.073 09:11:09 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 88837 00:29:02.073 09:11:09 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 88837 ']' 00:29:02.073 09:11:09 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:02.073 09:11:09 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:02.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:02.073 09:11:09 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:02.073 09:11:09 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:02.073 09:11:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:02.332 [2024-07-25 09:11:09.223616] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:29:02.332 [2024-07-25 09:11:09.223808] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:02.332 [2024-07-25 09:11:09.387239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:02.591 [2024-07-25 09:11:09.621286] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:02.591 [2024-07-25 09:11:09.621382] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:02.591 [2024-07-25 09:11:09.621399] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:02.591 [2024-07-25 09:11:09.621414] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:02.591 [2024-07-25 09:11:09.621440] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
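(The nvmf_tgt instance above was started with -i 0 and the full 0xFFFF tracepoint mask, so its events can be inspected while the dif tests below run. A small sketch of the two options spelled out in the app_setup_trace notices above; only the output paths are assumptions added for illustration:

    # Both commands are taken from the NOTICE lines above; /tmp destinations are assumptions.
    spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace_snapshot.txt    # capture a snapshot of events at runtime
    cp /dev/shm/nvmf_trace.0 /tmp/                            # keep the raw trace for offline analysis/debug
)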
00:29:02.591 [2024-07-25 09:11:09.621491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:02.849 [2024-07-25 09:11:09.807809] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:29:03.108 09:11:10 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:03.108 09:11:10 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:29:03.108 09:11:10 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:03.108 09:11:10 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:03.108 09:11:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:03.108 09:11:10 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:03.108 09:11:10 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:29:03.108 09:11:10 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:29:03.367 09:11:10 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.367 09:11:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:03.367 [2024-07-25 09:11:10.225890] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:03.367 09:11:10 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.367 09:11:10 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:29:03.367 09:11:10 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:03.367 09:11:10 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:03.367 09:11:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:03.367 ************************************ 00:29:03.367 START TEST fio_dif_1_default 00:29:03.367 ************************************ 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:03.367 bdev_null0 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.367 09:11:10 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:03.367 [2024-07-25 09:11:10.270038] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:03.367 { 00:29:03.367 "params": { 00:29:03.367 "name": "Nvme$subsystem", 00:29:03.367 "trtype": "$TEST_TRANSPORT", 00:29:03.367 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:03.367 "adrfam": "ipv4", 00:29:03.367 "trsvcid": "$NVMF_PORT", 00:29:03.367 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:03.367 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:03.367 "hdgst": ${hdgst:-false}, 00:29:03.367 "ddgst": ${ddgst:-false} 00:29:03.367 }, 00:29:03.367 "method": "bdev_nvme_attach_controller" 00:29:03.367 } 00:29:03.367 EOF 00:29:03.367 )") 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:29:03.367 09:11:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:03.367 "params": { 00:29:03.367 "name": "Nvme0", 00:29:03.367 "trtype": "tcp", 00:29:03.367 "traddr": "10.0.0.2", 00:29:03.367 "adrfam": "ipv4", 00:29:03.367 "trsvcid": "4420", 00:29:03.367 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:03.368 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:03.368 "hdgst": false, 00:29:03.368 "ddgst": false 00:29:03.368 }, 00:29:03.368 "method": "bdev_nvme_attach_controller" 00:29:03.368 }' 00:29:03.368 09:11:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:03.368 09:11:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:03.368 09:11:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # break 00:29:03.368 09:11:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:03.368 09:11:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:03.627 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:03.627 fio-3.35 00:29:03.627 Starting 1 thread 00:29:15.841 00:29:15.841 filename0: (groupid=0, jobs=1): err= 0: pid=88900: Thu Jul 25 09:11:21 2024 00:29:15.841 read: IOPS=6761, BW=26.4MiB/s (27.7MB/s)(264MiB/10001msec) 00:29:15.841 slat (nsec): min=7495, max=97396, avg=11261.31, stdev=5541.64 00:29:15.841 clat (usec): min=435, max=2252, avg=557.29, stdev=51.37 00:29:15.842 lat (usec): min=443, max=2266, avg=568.56, stdev=52.36 00:29:15.842 clat percentiles (usec): 00:29:15.842 | 1.00th=[ 465], 5.00th=[ 486], 10.00th=[ 498], 20.00th=[ 515], 00:29:15.842 | 30.00th=[ 529], 40.00th=[ 545], 50.00th=[ 553], 60.00th=[ 570], 00:29:15.842 | 70.00th=[ 578], 80.00th=[ 594], 90.00th=[ 611], 95.00th=[ 627], 00:29:15.842 | 99.00th=[ 693], 99.50th=[ 791], 99.90th=[ 873], 99.95th=[ 922], 00:29:15.842 | 99.99th=[ 1287] 00:29:15.842 bw ( KiB/s): min=26016, max=28544, per=99.87%, avg=27011.37, stdev=683.32, samples=19 00:29:15.842 iops : min= 6504, max= 7136, avg=6752.84, stdev=170.83, samples=19 00:29:15.842 lat (usec) : 500=11.25%, 750=88.06%, 1000=0.67% 00:29:15.842 lat (msec) : 2=0.01%, 4=0.01% 00:29:15.842 cpu : usr=85.22%, sys=12.63%, ctx=25, majf=0, minf=1063 00:29:15.842 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:15.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:15.842 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:15.842 issued rwts: total=67620,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:15.842 latency : target=0, window=0, percentile=100.00%, depth=4 
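The printf fragment above is what gen_nvmf_target_json feeds to fio on /dev/fd/62 through --spdk_json_conf. The complete file never appears in the log; a minimal standalone equivalent, assuming SPDK's usual "subsystems"/"bdev"/"config" envelope keys and reusing only the parameter values shown in the trace (the temp-file path and job-file name are placeholders), would look roughly like:

    # Sketch only: the harness streams this JSON over /dev/fd/62 instead of a temp file.
    cat > /tmp/nvme0_bdev.json <<'JSON'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    JSON
    # The spdk_bdev ioengine comes from the preloaded fio plugin (the run above also preloads libasan).
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/nvme0_bdev.json jobfile.fio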
00:29:15.842 00:29:15.842 Run status group 0 (all jobs): 00:29:15.842 READ: bw=26.4MiB/s (27.7MB/s), 26.4MiB/s-26.4MiB/s (27.7MB/s-27.7MB/s), io=264MiB (277MB), run=10001-10001msec 00:29:15.842 ----------------------------------------------------- 00:29:15.842 Suppressions used: 00:29:15.842 count bytes template 00:29:15.842 1 8 /usr/src/fio/parse.c 00:29:15.842 1 8 libtcmalloc_minimal.so 00:29:15.842 1 904 libcrypto.so 00:29:15.842 ----------------------------------------------------- 00:29:15.842 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.842 00:29:15.842 real 0m12.350s 00:29:15.842 user 0m10.392s 00:29:15.842 sys 0m1.664s 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:15.842 ************************************ 00:29:15.842 END TEST fio_dif_1_default 00:29:15.842 ************************************ 00:29:15.842 09:11:22 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:29:15.842 09:11:22 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:15.842 09:11:22 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:15.842 09:11:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:15.842 ************************************ 00:29:15.842 START TEST fio_dif_1_multi_subsystems 00:29:15.842 ************************************ 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:15.842 bdev_null0 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:15.842 [2024-07-25 09:11:22.666231] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:15.842 bdev_null1 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:15.842 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:15.842 { 00:29:15.842 "params": { 00:29:15.842 "name": "Nvme$subsystem", 00:29:15.842 "trtype": "$TEST_TRANSPORT", 00:29:15.842 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:15.842 "adrfam": "ipv4", 00:29:15.842 "trsvcid": "$NVMF_PORT", 00:29:15.842 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:15.842 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:15.842 "hdgst": ${hdgst:-false}, 00:29:15.842 "ddgst": ${ddgst:-false} 00:29:15.842 }, 00:29:15.842 "method": "bdev_nvme_attach_controller" 00:29:15.842 } 00:29:15.842 EOF 00:29:15.842 )") 00:29:15.843 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:15.843 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:29:15.843 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:15.843 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:29:15.843 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:15.843 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:15.843 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:29:15.843 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:29:15.843 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:15.843 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for 
sanitizer in "${sanitizers[@]}" 00:29:15.843 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:29:15.843 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:29:15.843 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:15.843 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:29:15.843 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:29:15.843 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:15.843 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:29:15.843 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:15.843 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:15.843 { 00:29:15.843 "params": { 00:29:15.843 "name": "Nvme$subsystem", 00:29:15.843 "trtype": "$TEST_TRANSPORT", 00:29:15.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:15.843 "adrfam": "ipv4", 00:29:15.843 "trsvcid": "$NVMF_PORT", 00:29:15.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:15.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:15.843 "hdgst": ${hdgst:-false}, 00:29:15.843 "ddgst": ${ddgst:-false} 00:29:15.843 }, 00:29:15.843 "method": "bdev_nvme_attach_controller" 00:29:15.843 } 00:29:15.843 EOF 00:29:15.843 )") 00:29:15.843 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:29:15.843 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:29:15.843 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:29:15.843 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:29:15.843 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:29:15.843 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:15.843 "params": { 00:29:15.843 "name": "Nvme0", 00:29:15.843 "trtype": "tcp", 00:29:15.843 "traddr": "10.0.0.2", 00:29:15.843 "adrfam": "ipv4", 00:29:15.843 "trsvcid": "4420", 00:29:15.843 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:15.843 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:15.843 "hdgst": false, 00:29:15.843 "ddgst": false 00:29:15.843 }, 00:29:15.843 "method": "bdev_nvme_attach_controller" 00:29:15.843 },{ 00:29:15.843 "params": { 00:29:15.843 "name": "Nvme1", 00:29:15.843 "trtype": "tcp", 00:29:15.843 "traddr": "10.0.0.2", 00:29:15.843 "adrfam": "ipv4", 00:29:15.843 "trsvcid": "4420", 00:29:15.843 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:15.843 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:15.843 "hdgst": false, 00:29:15.843 "ddgst": false 00:29:15.843 }, 00:29:15.843 "method": "bdev_nvme_attach_controller" 00:29:15.843 }' 00:29:15.843 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:15.843 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:15.843 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # break 00:29:15.843 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:15.843 09:11:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:16.102 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:16.102 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:16.102 fio-3.35 00:29:16.102 Starting 2 threads 00:29:28.317 00:29:28.317 filename0: (groupid=0, jobs=1): err= 0: pid=89060: Thu Jul 25 09:11:33 2024 00:29:28.317 read: IOPS=3772, BW=14.7MiB/s (15.5MB/s)(147MiB/10001msec) 00:29:28.317 slat (nsec): min=6192, max=80969, avg=15760.45, stdev=5142.93 00:29:28.317 clat (usec): min=535, max=5418, avg=1016.49, stdev=96.41 00:29:28.317 lat (usec): min=544, max=5439, avg=1032.25, stdev=96.73 00:29:28.317 clat percentiles (usec): 00:29:28.317 | 1.00th=[ 898], 5.00th=[ 930], 10.00th=[ 947], 20.00th=[ 971], 00:29:28.317 | 30.00th=[ 988], 40.00th=[ 996], 50.00th=[ 1004], 60.00th=[ 1020], 00:29:28.317 | 70.00th=[ 1029], 80.00th=[ 1045], 90.00th=[ 1074], 95.00th=[ 1106], 00:29:28.317 | 99.00th=[ 1467], 99.50th=[ 1500], 99.90th=[ 1647], 99.95th=[ 1893], 00:29:28.317 | 99.99th=[ 5407] 00:29:28.317 bw ( KiB/s): min=12928, max=15584, per=50.01%, avg=15092.16, stdev=575.55, samples=19 00:29:28.317 iops : min= 3232, max= 3896, avg=3773.00, stdev=143.88, samples=19 00:29:28.317 lat (usec) : 750=0.05%, 1000=43.47% 00:29:28.317 lat (msec) : 2=56.43%, 4=0.03%, 10=0.01% 00:29:28.317 cpu : usr=90.39%, sys=7.99%, ctx=30, majf=0, minf=1074 00:29:28.317 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:28.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:28.317 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:28.317 issued rwts: total=37728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:29:28.317 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:28.317 filename1: (groupid=0, jobs=1): err= 0: pid=89061: Thu Jul 25 09:11:33 2024 00:29:28.317 read: IOPS=3772, BW=14.7MiB/s (15.5MB/s)(147MiB/10001msec) 00:29:28.317 slat (usec): min=5, max=338, avg=15.88, stdev= 6.37 00:29:28.317 clat (usec): min=556, max=2591, avg=1016.30, stdev=94.77 00:29:28.317 lat (usec): min=566, max=2615, avg=1032.17, stdev=95.59 00:29:28.317 clat percentiles (usec): 00:29:28.317 | 1.00th=[ 857], 5.00th=[ 906], 10.00th=[ 930], 20.00th=[ 963], 00:29:28.318 | 30.00th=[ 979], 40.00th=[ 996], 50.00th=[ 1012], 60.00th=[ 1020], 00:29:28.318 | 70.00th=[ 1037], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1139], 00:29:28.318 | 99.00th=[ 1467], 99.50th=[ 1516], 99.90th=[ 1663], 99.95th=[ 1844], 00:29:28.318 | 99.99th=[ 2540] 00:29:28.318 bw ( KiB/s): min=12928, max=15584, per=50.01%, avg=15092.26, stdev=574.47, samples=19 00:29:28.318 iops : min= 3232, max= 3896, avg=3773.05, stdev=143.61, samples=19 00:29:28.318 lat (usec) : 750=0.11%, 1000=43.01% 00:29:28.318 lat (msec) : 2=56.85%, 4=0.03% 00:29:28.318 cpu : usr=90.05%, sys=7.96%, ctx=158, majf=0, minf=1074 00:29:28.318 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:28.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:28.318 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:28.318 issued rwts: total=37728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:28.318 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:28.318 00:29:28.318 Run status group 0 (all jobs): 00:29:28.318 READ: bw=29.5MiB/s (30.9MB/s), 14.7MiB/s-14.7MiB/s (15.5MB/s-15.5MB/s), io=295MiB (309MB), run=10001-10001msec 00:29:28.318 ----------------------------------------------------- 00:29:28.318 Suppressions used: 00:29:28.318 count bytes template 00:29:28.318 2 16 /usr/src/fio/parse.c 00:29:28.318 1 8 libtcmalloc_minimal.so 00:29:28.318 1 904 libcrypto.so 00:29:28.318 ----------------------------------------------------- 00:29:28.318 00:29:28.318 09:11:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:29:28.318 09:11:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:29:28.318 09:11:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:29:28.318 09:11:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:28.318 09:11:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:29:28.318 09:11:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:28.318 09:11:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.318 09:11:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:28.318 09:11:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.318 09:11:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:28.318 09:11:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.318 09:11:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:28.318 09:11:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.318 09:11:35 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@45 -- # for sub in "$@" 00:29:28.318 09:11:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:28.318 09:11:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:29:28.318 09:11:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:28.318 09:11:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.318 09:11:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:28.318 09:11:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.318 09:11:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:28.318 09:11:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.318 09:11:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:28.318 09:11:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.318 00:29:28.318 real 0m12.566s 00:29:28.318 user 0m20.110s 00:29:28.318 sys 0m2.033s 00:29:28.318 09:11:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:28.318 09:11:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:28.318 ************************************ 00:29:28.318 END TEST fio_dif_1_multi_subsystems 00:29:28.318 ************************************ 00:29:28.318 09:11:35 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:29:28.318 09:11:35 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:28.318 09:11:35 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:28.318 09:11:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:28.318 ************************************ 00:29:28.318 START TEST fio_dif_rand_params 00:29:28.318 ************************************ 00:29:28.318 09:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:29:28.318 09:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:29:28.318 09:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:29:28.318 09:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:29:28.318 09:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:29:28.318 09:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:29:28.318 09:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:29:28.318 09:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:29:28.318 09:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:29:28.318 09:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:28.318 09:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:28.318 09:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:28.318 09:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:29:28.318 09:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:29:28.318 09:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:29:28.318 09:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:28.318 bdev_null0 00:29:28.318 09:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.318 09:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:28.318 09:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.318 09:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:28.318 09:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.318 09:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:28.318 09:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.318 09:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:28.318 09:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.318 09:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:28.318 09:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.318 09:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:28.318 [2024-07-25 09:11:35.287285] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:28.318 09:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.318 09:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:29:28.318 09:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:29:28.318 09:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:28.318 09:11:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:29:28.318 09:11:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:29:28.318 09:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:28.318 09:11:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:28.318 09:11:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:28.318 { 00:29:28.318 "params": { 00:29:28.318 "name": "Nvme$subsystem", 00:29:28.318 "trtype": "$TEST_TRANSPORT", 00:29:28.318 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:28.318 "adrfam": "ipv4", 00:29:28.318 "trsvcid": "$NVMF_PORT", 00:29:28.318 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:28.318 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:28.318 "hdgst": ${hdgst:-false}, 00:29:28.318 "ddgst": ${ddgst:-false} 00:29:28.318 }, 00:29:28.318 "method": "bdev_nvme_attach_controller" 00:29:28.318 } 00:29:28.318 EOF 00:29:28.318 )") 00:29:28.318 09:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:28.318 09:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:28.318 09:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local 
fio_dir=/usr/src/fio 00:29:28.318 09:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:28.318 09:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:28.318 09:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:28.318 09:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:28.318 09:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:28.318 09:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:29:28.318 09:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:28.319 09:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:28.319 09:11:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:28.319 09:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:29:28.319 09:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:28.319 09:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:28.319 09:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:28.319 09:11:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:28.319 09:11:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:29:28.319 09:11:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:29:28.319 09:11:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:28.319 "params": { 00:29:28.319 "name": "Nvme0", 00:29:28.319 "trtype": "tcp", 00:29:28.319 "traddr": "10.0.0.2", 00:29:28.319 "adrfam": "ipv4", 00:29:28.319 "trsvcid": "4420", 00:29:28.319 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:28.319 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:28.319 "hdgst": false, 00:29:28.319 "ddgst": false 00:29:28.319 }, 00:29:28.319 "method": "bdev_nvme_attach_controller" 00:29:28.319 }' 00:29:28.319 09:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:28.319 09:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:28.319 09:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:29:28.319 09:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:28.319 09:11:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:28.640 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:29:28.640 ... 
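The fio job file for this case is generated on the fly by gen_fio_conf and handed to fio as /dev/fd/61, so its contents never reach the log. Based on the parameters selected above (NULL_DIF=3, bs=128k, numjobs=3, iodepth=3, runtime=5) and the randread/spdk_bdev job line echoed by fio, a hand-written equivalent would look roughly like the sketch below; the section name, the filename target, and the thread/time_based settings are assumptions rather than a copy of the generated file.

    cat > dif_rand_params.fio <<'FIO'
    [global]
    ioengine=spdk_bdev   ; provided by the preloaded SPDK fio plugin
    thread=1             ; the SPDK engines run fio in threaded mode (assumed for the generated file too)
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3
    time_based=1
    runtime=5

    [filename0]
    filename=bdev_null0  ; the DIF-type-3 null bdev created earlier in this test
    FIO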
00:29:28.640 fio-3.35 00:29:28.640 Starting 3 threads 00:29:35.213 00:29:35.213 filename0: (groupid=0, jobs=1): err= 0: pid=89225: Thu Jul 25 09:11:41 2024 00:29:35.213 read: IOPS=219, BW=27.4MiB/s (28.8MB/s)(137MiB/5001msec) 00:29:35.213 slat (nsec): min=6100, max=80616, avg=14316.20, stdev=7056.82 00:29:35.213 clat (usec): min=12547, max=15349, avg=13626.35, stdev=336.10 00:29:35.213 lat (usec): min=12555, max=15382, avg=13640.67, stdev=336.64 00:29:35.213 clat percentiles (usec): 00:29:35.213 | 1.00th=[12780], 5.00th=[13042], 10.00th=[13173], 20.00th=[13435], 00:29:35.213 | 30.00th=[13566], 40.00th=[13566], 50.00th=[13698], 60.00th=[13698], 00:29:35.213 | 70.00th=[13829], 80.00th=[13829], 90.00th=[13960], 95.00th=[14091], 00:29:35.213 | 99.00th=[14222], 99.50th=[14353], 99.90th=[15401], 99.95th=[15401], 00:29:35.213 | 99.99th=[15401] 00:29:35.213 bw ( KiB/s): min=27648, max=29184, per=33.41%, avg=28159.78, stdev=533.86, samples=9 00:29:35.213 iops : min= 216, max= 228, avg=219.89, stdev= 4.20, samples=9 00:29:35.213 lat (msec) : 20=100.00% 00:29:35.213 cpu : usr=91.42%, sys=7.68%, ctx=11, majf=0, minf=1074 00:29:35.213 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:35.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.213 issued rwts: total=1098,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:35.213 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:35.213 filename0: (groupid=0, jobs=1): err= 0: pid=89226: Thu Jul 25 09:11:41 2024 00:29:35.213 read: IOPS=219, BW=27.5MiB/s (28.8MB/s)(138MiB/5007msec) 00:29:35.213 slat (nsec): min=8363, max=63933, avg=14191.56, stdev=7080.71 00:29:35.213 clat (usec): min=7775, max=14307, avg=13604.92, stdev=444.40 00:29:35.213 lat (usec): min=7784, max=14328, avg=13619.11, stdev=445.02 00:29:35.213 clat percentiles (usec): 00:29:35.213 | 1.00th=[12649], 5.00th=[13042], 10.00th=[13042], 20.00th=[13304], 00:29:35.213 | 30.00th=[13566], 40.00th=[13566], 50.00th=[13698], 60.00th=[13698], 00:29:35.213 | 70.00th=[13829], 80.00th=[13829], 90.00th=[13960], 95.00th=[14091], 00:29:35.213 | 99.00th=[14222], 99.50th=[14222], 99.90th=[14353], 99.95th=[14353], 00:29:35.213 | 99.99th=[14353] 00:29:35.213 bw ( KiB/s): min=27592, max=29125, per=33.39%, avg=28147.22, stdev=536.30, samples=9 00:29:35.213 iops : min= 215, max= 227, avg=219.78, stdev= 4.15, samples=9 00:29:35.213 lat (msec) : 10=0.27%, 20=99.73% 00:29:35.213 cpu : usr=91.61%, sys=7.49%, ctx=12, majf=0, minf=1062 00:29:35.213 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:35.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.213 issued rwts: total=1101,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:35.213 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:35.213 filename0: (groupid=0, jobs=1): err= 0: pid=89227: Thu Jul 25 09:11:41 2024 00:29:35.213 read: IOPS=219, BW=27.4MiB/s (28.8MB/s)(137MiB/5003msec) 00:29:35.213 slat (nsec): min=6342, max=55862, avg=14007.31, stdev=6224.25 00:29:35.213 clat (usec): min=10539, max=20160, avg=13631.73, stdev=500.92 00:29:35.213 lat (usec): min=10549, max=20182, avg=13645.74, stdev=501.25 00:29:35.213 clat percentiles (usec): 00:29:35.213 | 1.00th=[12649], 5.00th=[13042], 10.00th=[13173], 20.00th=[13304], 00:29:35.213 | 30.00th=[13435], 40.00th=[13698], 
50.00th=[13698], 60.00th=[13698], 00:29:35.213 | 70.00th=[13829], 80.00th=[13829], 90.00th=[13960], 95.00th=[14091], 00:29:35.213 | 99.00th=[14222], 99.50th=[14353], 99.90th=[20055], 99.95th=[20055], 00:29:35.213 | 99.99th=[20055] 00:29:35.213 bw ( KiB/s): min=26880, max=29242, per=33.32%, avg=28081.11, stdev=689.36, samples=9 00:29:35.213 iops : min= 210, max= 228, avg=219.33, stdev= 5.29, samples=9 00:29:35.213 lat (msec) : 20=99.73%, 50=0.27% 00:29:35.213 cpu : usr=91.04%, sys=8.04%, ctx=24, majf=0, minf=1072 00:29:35.213 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:35.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.213 issued rwts: total=1098,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:35.213 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:35.213 00:29:35.213 Run status group 0 (all jobs): 00:29:35.213 READ: bw=82.3MiB/s (86.3MB/s), 27.4MiB/s-27.5MiB/s (28.8MB/s-28.8MB/s), io=412MiB (432MB), run=5001-5007msec 00:29:35.472 ----------------------------------------------------- 00:29:35.472 Suppressions used: 00:29:35.472 count bytes template 00:29:35.472 5 44 /usr/src/fio/parse.c 00:29:35.472 1 8 libtcmalloc_minimal.so 00:29:35.472 1 904 libcrypto.so 00:29:35.472 ----------------------------------------------------- 00:29:35.472 00:29:35.472 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:29:35.472 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:35.472 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:35.472 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:35.472 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:35.472 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:35.472 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.472 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:35.472 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.472 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:35.472 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.472 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:35.472 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.472 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:29:35.472 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:29:35.472 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:29:35.472 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:29:35.472 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:29:35.472 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:29:35.472 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:29:35.472 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:35.472 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:35.472 09:11:42 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:35.472 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:29:35.472 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:29:35.472 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.472 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:35.472 bdev_null0 00:29:35.472 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.472 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:35.472 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.472 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:35.472 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.472 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:35.472 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.472 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:35.472 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.472 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:35.472 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.473 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:35.473 [2024-07-25 09:11:42.541468] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:35.473 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.473 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:35.473 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:29:35.473 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:29:35.473 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:29:35.473 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.473 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:35.473 bdev_null1 00:29:35.473 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.473 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:35.473 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.473 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:35.473 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.473 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:35.473 09:11:42 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.473 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:35.473 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.473 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:35.473 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.473 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:35.473 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.473 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:35.473 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:29:35.473 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:29:35.473 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:29:35.473 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.473 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:35.731 bdev_null2 00:29:35.731 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.731 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:29:35.731 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.731 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:35.731 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.731 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:29:35.731 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.731 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:35.731 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.731 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:35.732 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.732 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:35.732 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.732 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:29:35.732 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:29:35.732 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:29:35.732 09:11:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:29:35.732 09:11:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:29:35.732 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:35.732 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:35.732 09:11:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:35.732 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:35.732 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:35.732 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:35.732 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:35.732 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:35.732 09:11:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:35.732 { 00:29:35.732 "params": { 00:29:35.732 "name": "Nvme$subsystem", 00:29:35.732 "trtype": "$TEST_TRANSPORT", 00:29:35.732 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.732 "adrfam": "ipv4", 00:29:35.732 "trsvcid": "$NVMF_PORT", 00:29:35.732 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.732 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.732 "hdgst": ${hdgst:-false}, 00:29:35.732 "ddgst": ${ddgst:-false} 00:29:35.732 }, 00:29:35.732 "method": "bdev_nvme_attach_controller" 00:29:35.732 } 00:29:35.732 EOF 00:29:35.732 )") 00:29:35.732 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:35.732 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:35.732 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:29:35.732 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:35.732 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:35.732 09:11:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:35.732 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:35.732 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:35.732 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:35.732 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:35.732 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:29:35.732 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:35.732 09:11:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:35.732 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:35.732 09:11:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:35.732 { 00:29:35.732 "params": { 00:29:35.732 "name": "Nvme$subsystem", 00:29:35.732 "trtype": "$TEST_TRANSPORT", 00:29:35.732 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.732 "adrfam": "ipv4", 00:29:35.732 "trsvcid": "$NVMF_PORT", 00:29:35.732 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.732 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.732 "hdgst": ${hdgst:-false}, 00:29:35.732 "ddgst": ${ddgst:-false} 00:29:35.732 }, 00:29:35.732 "method": "bdev_nvme_attach_controller" 00:29:35.732 
} 00:29:35.732 EOF 00:29:35.732 )") 00:29:35.732 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:35.732 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:35.732 09:11:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:35.732 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:35.732 09:11:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:35.732 09:11:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:35.732 09:11:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:35.732 { 00:29:35.732 "params": { 00:29:35.732 "name": "Nvme$subsystem", 00:29:35.732 "trtype": "$TEST_TRANSPORT", 00:29:35.732 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.732 "adrfam": "ipv4", 00:29:35.732 "trsvcid": "$NVMF_PORT", 00:29:35.732 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.732 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.732 "hdgst": ${hdgst:-false}, 00:29:35.732 "ddgst": ${ddgst:-false} 00:29:35.732 }, 00:29:35.732 "method": "bdev_nvme_attach_controller" 00:29:35.732 } 00:29:35.732 EOF 00:29:35.732 )") 00:29:35.732 09:11:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:35.732 09:11:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:29:35.732 09:11:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:29:35.732 09:11:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:35.732 "params": { 00:29:35.732 "name": "Nvme0", 00:29:35.732 "trtype": "tcp", 00:29:35.732 "traddr": "10.0.0.2", 00:29:35.732 "adrfam": "ipv4", 00:29:35.732 "trsvcid": "4420", 00:29:35.732 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:35.732 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:35.732 "hdgst": false, 00:29:35.732 "ddgst": false 00:29:35.732 }, 00:29:35.732 "method": "bdev_nvme_attach_controller" 00:29:35.732 },{ 00:29:35.732 "params": { 00:29:35.732 "name": "Nvme1", 00:29:35.732 "trtype": "tcp", 00:29:35.732 "traddr": "10.0.0.2", 00:29:35.732 "adrfam": "ipv4", 00:29:35.732 "trsvcid": "4420", 00:29:35.732 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:35.732 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:35.732 "hdgst": false, 00:29:35.732 "ddgst": false 00:29:35.732 }, 00:29:35.732 "method": "bdev_nvme_attach_controller" 00:29:35.732 },{ 00:29:35.732 "params": { 00:29:35.732 "name": "Nvme2", 00:29:35.732 "trtype": "tcp", 00:29:35.732 "traddr": "10.0.0.2", 00:29:35.732 "adrfam": "ipv4", 00:29:35.732 "trsvcid": "4420", 00:29:35.732 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:35.732 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:35.732 "hdgst": false, 00:29:35.732 "ddgst": false 00:29:35.732 }, 00:29:35.732 "method": "bdev_nvme_attach_controller" 00:29:35.732 }' 00:29:35.732 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:35.732 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:35.732 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:29:35.732 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:35.732 09:11:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 
/dev/fd/61 00:29:35.991 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:35.991 ... 00:29:35.991 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:35.991 ... 00:29:35.991 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:35.991 ... 00:29:35.991 fio-3.35 00:29:35.991 Starting 24 threads 00:29:48.199 00:29:48.199 filename0: (groupid=0, jobs=1): err= 0: pid=89326: Thu Jul 25 09:11:54 2024 00:29:48.199 read: IOPS=181, BW=726KiB/s (744kB/s)(7292KiB/10039msec) 00:29:48.199 slat (usec): min=4, max=9239, avg=54.44, stdev=459.41 00:29:48.199 clat (msec): min=38, max=157, avg=87.80, stdev=19.76 00:29:48.199 lat (msec): min=38, max=157, avg=87.86, stdev=19.76 00:29:48.199 clat percentiles (msec): 00:29:48.199 | 1.00th=[ 46], 5.00th=[ 59], 10.00th=[ 64], 20.00th=[ 69], 00:29:48.199 | 30.00th=[ 77], 40.00th=[ 85], 50.00th=[ 90], 60.00th=[ 94], 00:29:48.199 | 70.00th=[ 96], 80.00th=[ 104], 90.00th=[ 113], 95.00th=[ 121], 00:29:48.199 | 99.00th=[ 150], 99.50th=[ 150], 99.90th=[ 159], 99.95th=[ 159], 00:29:48.199 | 99.99th=[ 159] 00:29:48.199 bw ( KiB/s): min= 509, max= 824, per=4.21%, avg=722.30, stdev=75.08, samples=20 00:29:48.199 iops : min= 127, max= 206, avg=180.50, stdev=18.84, samples=20 00:29:48.199 lat (msec) : 50=1.97%, 100=74.00%, 250=24.03% 00:29:48.199 cpu : usr=41.77%, sys=2.07%, ctx=1256, majf=0, minf=1074 00:29:48.199 IO depths : 1=0.1%, 2=2.1%, 4=8.7%, 8=74.5%, 16=14.6%, 32=0.0%, >=64=0.0% 00:29:48.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.199 complete : 0=0.0%, 4=89.2%, 8=8.9%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.199 issued rwts: total=1823,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.199 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.199 filename0: (groupid=0, jobs=1): err= 0: pid=89327: Thu Jul 25 09:11:54 2024 00:29:48.199 read: IOPS=178, BW=715KiB/s (732kB/s)(7196KiB/10064msec) 00:29:48.199 slat (usec): min=5, max=8050, avg=36.27, stdev=356.39 00:29:48.199 clat (msec): min=5, max=165, avg=89.07, stdev=28.39 00:29:48.200 lat (msec): min=5, max=165, avg=89.11, stdev=28.39 00:29:48.200 clat percentiles (msec): 00:29:48.200 | 1.00th=[ 6], 5.00th=[ 11], 10.00th=[ 62], 20.00th=[ 71], 00:29:48.200 | 30.00th=[ 85], 40.00th=[ 89], 50.00th=[ 93], 60.00th=[ 96], 00:29:48.200 | 70.00th=[ 102], 80.00th=[ 109], 90.00th=[ 120], 95.00th=[ 129], 00:29:48.200 | 99.00th=[ 153], 99.50th=[ 161], 99.90th=[ 161], 99.95th=[ 165], 00:29:48.200 | 99.99th=[ 165] 00:29:48.200 bw ( KiB/s): min= 512, max= 1536, per=4.17%, avg=715.35, stdev=209.77, samples=20 00:29:48.200 iops : min= 128, max= 384, avg=178.75, stdev=52.45, samples=20 00:29:48.200 lat (msec) : 10=4.45%, 20=1.78%, 50=0.11%, 100=61.92%, 250=31.74% 00:29:48.200 cpu : usr=38.76%, sys=1.82%, ctx=1325, majf=0, minf=1075 00:29:48.200 IO depths : 1=0.3%, 2=4.1%, 4=15.3%, 8=66.6%, 16=13.7%, 32=0.0%, >=64=0.0% 00:29:48.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.200 complete : 0=0.0%, 4=91.5%, 8=5.1%, 16=3.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.200 issued rwts: total=1799,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.200 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.200 filename0: (groupid=0, jobs=1): err= 0: pid=89328: Thu Jul 25 09:11:54 2024 00:29:48.200 read: IOPS=177, BW=710KiB/s 
(727kB/s)(7144KiB/10067msec) 00:29:48.200 slat (usec): min=4, max=8093, avg=37.60, stdev=379.80 00:29:48.200 clat (msec): min=32, max=167, avg=89.78, stdev=20.47 00:29:48.200 lat (msec): min=32, max=167, avg=89.82, stdev=20.48 00:29:48.200 clat percentiles (msec): 00:29:48.200 | 1.00th=[ 39], 5.00th=[ 59], 10.00th=[ 63], 20.00th=[ 72], 00:29:48.200 | 30.00th=[ 83], 40.00th=[ 86], 50.00th=[ 93], 60.00th=[ 96], 00:29:48.200 | 70.00th=[ 97], 80.00th=[ 107], 90.00th=[ 116], 95.00th=[ 126], 00:29:48.200 | 99.00th=[ 138], 99.50th=[ 155], 99.90th=[ 163], 99.95th=[ 169], 00:29:48.200 | 99.99th=[ 169] 00:29:48.200 bw ( KiB/s): min= 512, max= 816, per=4.12%, avg=707.90, stdev=70.28, samples=20 00:29:48.200 iops : min= 128, max= 204, avg=176.95, stdev=17.55, samples=20 00:29:48.200 lat (msec) : 50=3.92%, 100=72.34%, 250=23.74% 00:29:48.200 cpu : usr=32.34%, sys=1.81%, ctx=901, majf=0, minf=1074 00:29:48.200 IO depths : 1=0.1%, 2=2.0%, 4=7.8%, 8=74.9%, 16=15.3%, 32=0.0%, >=64=0.0% 00:29:48.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.200 complete : 0=0.0%, 4=89.5%, 8=8.8%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.200 issued rwts: total=1786,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.200 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.200 filename0: (groupid=0, jobs=1): err= 0: pid=89329: Thu Jul 25 09:11:54 2024 00:29:48.200 read: IOPS=173, BW=695KiB/s (712kB/s)(6980KiB/10044msec) 00:29:48.200 slat (nsec): min=5651, max=69643, avg=18625.66, stdev=8364.80 00:29:48.200 clat (msec): min=45, max=152, avg=91.86, stdev=19.72 00:29:48.200 lat (msec): min=45, max=153, avg=91.87, stdev=19.72 00:29:48.200 clat percentiles (msec): 00:29:48.200 | 1.00th=[ 53], 5.00th=[ 61], 10.00th=[ 64], 20.00th=[ 72], 00:29:48.200 | 30.00th=[ 84], 40.00th=[ 86], 50.00th=[ 95], 60.00th=[ 96], 00:29:48.200 | 70.00th=[ 101], 80.00th=[ 108], 90.00th=[ 120], 95.00th=[ 130], 00:29:48.200 | 99.00th=[ 142], 99.50th=[ 142], 99.90th=[ 153], 99.95th=[ 153], 00:29:48.200 | 99.99th=[ 153] 00:29:48.200 bw ( KiB/s): min= 528, max= 798, per=4.04%, avg=693.10, stdev=78.21, samples=20 00:29:48.200 iops : min= 132, max= 199, avg=173.25, stdev=19.52, samples=20 00:29:48.200 lat (msec) : 50=0.74%, 100=69.11%, 250=30.14% 00:29:48.200 cpu : usr=31.65%, sys=1.53%, ctx=1064, majf=0, minf=1073 00:29:48.200 IO depths : 1=0.1%, 2=3.0%, 4=11.9%, 8=70.9%, 16=14.2%, 32=0.0%, >=64=0.0% 00:29:48.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.200 complete : 0=0.0%, 4=90.3%, 8=7.1%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.200 issued rwts: total=1745,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.200 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.200 filename0: (groupid=0, jobs=1): err= 0: pid=89330: Thu Jul 25 09:11:54 2024 00:29:48.200 read: IOPS=174, BW=697KiB/s (713kB/s)(6984KiB/10026msec) 00:29:48.200 slat (usec): min=5, max=8088, avg=42.37, stdev=429.86 00:29:48.200 clat (msec): min=36, max=175, avg=91.60, stdev=19.28 00:29:48.200 lat (msec): min=36, max=175, avg=91.64, stdev=19.30 00:29:48.200 clat percentiles (msec): 00:29:48.200 | 1.00th=[ 52], 5.00th=[ 61], 10.00th=[ 68], 20.00th=[ 72], 00:29:48.200 | 30.00th=[ 85], 40.00th=[ 88], 50.00th=[ 94], 60.00th=[ 96], 00:29:48.200 | 70.00th=[ 97], 80.00th=[ 108], 90.00th=[ 118], 95.00th=[ 122], 00:29:48.200 | 99.00th=[ 144], 99.50th=[ 157], 99.90th=[ 176], 99.95th=[ 176], 00:29:48.200 | 99.99th=[ 176] 00:29:48.200 bw ( KiB/s): min= 576, max= 768, per=4.01%, avg=687.89, stdev=60.96, 
samples=19 00:29:48.200 iops : min= 144, max= 192, avg=171.95, stdev=15.26, samples=19 00:29:48.200 lat (msec) : 50=0.92%, 100=72.97%, 250=26.12% 00:29:48.200 cpu : usr=31.67%, sys=1.34%, ctx=889, majf=0, minf=1072 00:29:48.200 IO depths : 1=0.1%, 2=3.6%, 4=14.1%, 8=68.4%, 16=13.9%, 32=0.0%, >=64=0.0% 00:29:48.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.200 complete : 0=0.0%, 4=91.0%, 8=5.9%, 16=3.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.200 issued rwts: total=1746,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.200 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.200 filename0: (groupid=0, jobs=1): err= 0: pid=89331: Thu Jul 25 09:11:54 2024 00:29:48.200 read: IOPS=168, BW=675KiB/s (691kB/s)(6752KiB/10010msec) 00:29:48.200 slat (usec): min=4, max=8097, avg=36.21, stdev=352.50 00:29:48.200 clat (msec): min=13, max=186, avg=94.60, stdev=21.45 00:29:48.200 lat (msec): min=13, max=186, avg=94.64, stdev=21.45 00:29:48.200 clat percentiles (msec): 00:29:48.200 | 1.00th=[ 40], 5.00th=[ 64], 10.00th=[ 69], 20.00th=[ 79], 00:29:48.200 | 30.00th=[ 86], 40.00th=[ 90], 50.00th=[ 95], 60.00th=[ 99], 00:29:48.200 | 70.00th=[ 103], 80.00th=[ 107], 90.00th=[ 124], 95.00th=[ 133], 00:29:48.200 | 99.00th=[ 150], 99.50th=[ 150], 99.90th=[ 186], 99.95th=[ 186], 00:29:48.200 | 99.99th=[ 186] 00:29:48.200 bw ( KiB/s): min= 512, max= 824, per=3.85%, avg=660.84, stdev=92.53, samples=19 00:29:48.200 iops : min= 128, max= 206, avg=165.16, stdev=23.14, samples=19 00:29:48.200 lat (msec) : 20=0.59%, 50=1.07%, 100=62.38%, 250=35.96% 00:29:48.200 cpu : usr=38.20%, sys=1.97%, ctx=1310, majf=0, minf=1075 00:29:48.200 IO depths : 1=0.1%, 2=4.4%, 4=17.7%, 8=64.4%, 16=13.4%, 32=0.0%, >=64=0.0% 00:29:48.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.200 complete : 0=0.0%, 4=92.1%, 8=4.0%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.200 issued rwts: total=1688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.200 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.200 filename0: (groupid=0, jobs=1): err= 0: pid=89332: Thu Jul 25 09:11:54 2024 00:29:48.200 read: IOPS=185, BW=743KiB/s (761kB/s)(7488KiB/10077msec) 00:29:48.200 slat (usec): min=5, max=8033, avg=22.29, stdev=185.44 00:29:48.200 clat (msec): min=4, max=158, avg=85.81, stdev=27.49 00:29:48.200 lat (msec): min=4, max=158, avg=85.83, stdev=27.49 00:29:48.200 clat percentiles (msec): 00:29:48.200 | 1.00th=[ 6], 5.00th=[ 9], 10.00th=[ 60], 20.00th=[ 70], 00:29:48.200 | 30.00th=[ 81], 40.00th=[ 88], 50.00th=[ 91], 60.00th=[ 95], 00:29:48.200 | 70.00th=[ 97], 80.00th=[ 106], 90.00th=[ 113], 95.00th=[ 124], 00:29:48.200 | 99.00th=[ 136], 99.50th=[ 142], 99.90th=[ 153], 99.95th=[ 159], 00:29:48.200 | 99.99th=[ 159] 00:29:48.200 bw ( KiB/s): min= 528, max= 1539, per=4.33%, avg=742.75, stdev=197.90, samples=20 00:29:48.200 iops : min= 132, max= 384, avg=185.60, stdev=49.33, samples=20 00:29:48.200 lat (msec) : 10=5.13%, 20=0.85%, 50=2.24%, 100=65.49%, 250=26.28% 00:29:48.200 cpu : usr=40.32%, sys=1.86%, ctx=1307, majf=0, minf=1073 00:29:48.200 IO depths : 1=0.2%, 2=2.3%, 4=8.7%, 8=73.7%, 16=15.1%, 32=0.0%, >=64=0.0% 00:29:48.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.200 complete : 0=0.0%, 4=89.8%, 8=8.3%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.200 issued rwts: total=1872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.200 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.200 filename0: (groupid=0, 
jobs=1): err= 0: pid=89333: Thu Jul 25 09:11:54 2024 00:29:48.200 read: IOPS=185, BW=740KiB/s (758kB/s)(7444KiB/10054msec) 00:29:48.200 slat (usec): min=5, max=8051, avg=35.32, stdev=357.27 00:29:48.200 clat (msec): min=33, max=164, avg=86.06, stdev=20.39 00:29:48.200 lat (msec): min=33, max=164, avg=86.10, stdev=20.40 00:29:48.200 clat percentiles (msec): 00:29:48.200 | 1.00th=[ 36], 5.00th=[ 59], 10.00th=[ 61], 20.00th=[ 69], 00:29:48.200 | 30.00th=[ 72], 40.00th=[ 84], 50.00th=[ 86], 60.00th=[ 95], 00:29:48.200 | 70.00th=[ 96], 80.00th=[ 102], 90.00th=[ 109], 95.00th=[ 121], 00:29:48.200 | 99.00th=[ 144], 99.50th=[ 148], 99.90th=[ 150], 99.95th=[ 165], 00:29:48.200 | 99.99th=[ 165] 00:29:48.200 bw ( KiB/s): min= 640, max= 888, per=4.32%, avg=740.80, stdev=72.65, samples=20 00:29:48.200 iops : min= 160, max= 222, avg=185.20, stdev=18.16, samples=20 00:29:48.201 lat (msec) : 50=3.98%, 100=75.17%, 250=20.85% 00:29:48.201 cpu : usr=31.79%, sys=1.42%, ctx=1105, majf=0, minf=1073 00:29:48.201 IO depths : 1=0.1%, 2=1.0%, 4=3.8%, 8=79.5%, 16=15.7%, 32=0.0%, >=64=0.0% 00:29:48.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.201 complete : 0=0.0%, 4=88.2%, 8=11.0%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.201 issued rwts: total=1861,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.201 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.201 filename1: (groupid=0, jobs=1): err= 0: pid=89334: Thu Jul 25 09:11:54 2024 00:29:48.201 read: IOPS=164, BW=657KiB/s (672kB/s)(6596KiB/10045msec) 00:29:48.201 slat (usec): min=5, max=4082, avg=27.37, stdev=198.70 00:29:48.201 clat (msec): min=46, max=155, avg=97.16, stdev=18.49 00:29:48.201 lat (msec): min=46, max=156, avg=97.19, stdev=18.49 00:29:48.201 clat percentiles (msec): 00:29:48.201 | 1.00th=[ 61], 5.00th=[ 67], 10.00th=[ 78], 20.00th=[ 85], 00:29:48.201 | 30.00th=[ 88], 40.00th=[ 93], 50.00th=[ 95], 60.00th=[ 99], 00:29:48.201 | 70.00th=[ 104], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 136], 00:29:48.201 | 99.00th=[ 148], 99.50th=[ 157], 99.90th=[ 157], 99.95th=[ 157], 00:29:48.201 | 99.99th=[ 157] 00:29:48.201 bw ( KiB/s): min= 528, max= 790, per=3.82%, avg=654.30, stdev=75.78, samples=20 00:29:48.201 iops : min= 132, max= 197, avg=163.55, stdev=18.90, samples=20 00:29:48.201 lat (msec) : 50=0.55%, 100=61.13%, 250=38.33% 00:29:48.201 cpu : usr=40.42%, sys=2.00%, ctx=1213, majf=0, minf=1074 00:29:48.201 IO depths : 1=0.1%, 2=5.0%, 4=19.8%, 8=61.9%, 16=13.1%, 32=0.0%, >=64=0.0% 00:29:48.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.201 complete : 0=0.0%, 4=92.8%, 8=2.8%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.201 issued rwts: total=1649,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.201 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.201 filename1: (groupid=0, jobs=1): err= 0: pid=89335: Thu Jul 25 09:11:54 2024 00:29:48.201 read: IOPS=182, BW=731KiB/s (749kB/s)(7360KiB/10065msec) 00:29:48.201 slat (usec): min=5, max=8070, avg=33.54, stdev=337.33 00:29:48.201 clat (msec): min=32, max=170, avg=87.24, stdev=21.03 00:29:48.201 lat (msec): min=32, max=170, avg=87.27, stdev=21.03 00:29:48.201 clat percentiles (msec): 00:29:48.201 | 1.00th=[ 37], 5.00th=[ 59], 10.00th=[ 61], 20.00th=[ 69], 00:29:48.201 | 30.00th=[ 72], 40.00th=[ 85], 50.00th=[ 88], 60.00th=[ 96], 00:29:48.201 | 70.00th=[ 96], 80.00th=[ 105], 90.00th=[ 110], 95.00th=[ 121], 00:29:48.201 | 99.00th=[ 146], 99.50th=[ 153], 99.90th=[ 171], 99.95th=[ 171], 00:29:48.201 | 99.99th=[ 
171] 00:29:48.201 bw ( KiB/s): min= 542, max= 872, per=4.25%, avg=729.50, stdev=67.85, samples=20 00:29:48.201 iops : min= 135, max= 218, avg=182.35, stdev=17.04, samples=20 00:29:48.201 lat (msec) : 50=3.59%, 100=73.59%, 250=22.83% 00:29:48.201 cpu : usr=31.64%, sys=1.52%, ctx=908, majf=0, minf=1072 00:29:48.201 IO depths : 1=0.1%, 2=1.0%, 4=4.0%, 8=79.3%, 16=15.6%, 32=0.0%, >=64=0.0% 00:29:48.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.201 complete : 0=0.0%, 4=88.2%, 8=10.9%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.201 issued rwts: total=1840,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.201 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.201 filename1: (groupid=0, jobs=1): err= 0: pid=89336: Thu Jul 25 09:11:54 2024 00:29:48.201 read: IOPS=192, BW=769KiB/s (787kB/s)(7744KiB/10070msec) 00:29:48.201 slat (usec): min=5, max=8039, avg=28.90, stdev=234.79 00:29:48.201 clat (msec): min=18, max=164, avg=82.93, stdev=21.83 00:29:48.201 lat (msec): min=18, max=164, avg=82.96, stdev=21.83 00:29:48.201 clat percentiles (msec): 00:29:48.201 | 1.00th=[ 23], 5.00th=[ 43], 10.00th=[ 61], 20.00th=[ 65], 00:29:48.201 | 30.00th=[ 71], 40.00th=[ 79], 50.00th=[ 86], 60.00th=[ 91], 00:29:48.201 | 70.00th=[ 95], 80.00th=[ 100], 90.00th=[ 108], 95.00th=[ 120], 00:29:48.201 | 99.00th=[ 131], 99.50th=[ 138], 99.90th=[ 165], 99.95th=[ 165], 00:29:48.201 | 99.99th=[ 165] 00:29:48.201 bw ( KiB/s): min= 638, max= 928, per=4.47%, avg=767.90, stdev=73.51, samples=20 00:29:48.201 iops : min= 159, max= 232, avg=191.95, stdev=18.42, samples=20 00:29:48.201 lat (msec) : 20=0.88%, 50=5.58%, 100=74.12%, 250=19.42% 00:29:48.201 cpu : usr=41.75%, sys=2.39%, ctx=1476, majf=0, minf=1075 00:29:48.201 IO depths : 1=0.1%, 2=0.7%, 4=2.8%, 8=80.9%, 16=15.5%, 32=0.0%, >=64=0.0% 00:29:48.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.201 complete : 0=0.0%, 4=87.6%, 8=11.8%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.201 issued rwts: total=1936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.201 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.201 filename1: (groupid=0, jobs=1): err= 0: pid=89337: Thu Jul 25 09:11:54 2024 00:29:48.201 read: IOPS=179, BW=717KiB/s (734kB/s)(7176KiB/10009msec) 00:29:48.201 slat (usec): min=5, max=8065, avg=37.93, stdev=352.39 00:29:48.201 clat (msec): min=13, max=183, avg=89.08, stdev=20.38 00:29:48.201 lat (msec): min=13, max=183, avg=89.11, stdev=20.39 00:29:48.201 clat percentiles (msec): 00:29:48.201 | 1.00th=[ 39], 5.00th=[ 61], 10.00th=[ 64], 20.00th=[ 70], 00:29:48.201 | 30.00th=[ 81], 40.00th=[ 88], 50.00th=[ 91], 60.00th=[ 95], 00:29:48.201 | 70.00th=[ 99], 80.00th=[ 104], 90.00th=[ 116], 95.00th=[ 122], 00:29:48.201 | 99.00th=[ 148], 99.50th=[ 153], 99.90th=[ 184], 99.95th=[ 184], 00:29:48.201 | 99.99th=[ 184] 00:29:48.201 bw ( KiB/s): min= 592, max= 768, per=4.11%, avg=705.05, stdev=64.66, samples=19 00:29:48.201 iops : min= 148, max= 192, avg=176.21, stdev=16.22, samples=19 00:29:48.201 lat (msec) : 20=0.72%, 50=1.06%, 100=72.19%, 250=26.03% 00:29:48.201 cpu : usr=37.03%, sys=1.70%, ctx=1206, majf=0, minf=1072 00:29:48.201 IO depths : 1=0.1%, 2=3.0%, 4=11.8%, 8=71.1%, 16=14.2%, 32=0.0%, >=64=0.0% 00:29:48.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.201 complete : 0=0.0%, 4=90.2%, 8=7.2%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.201 issued rwts: total=1794,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.201 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:29:48.201 filename1: (groupid=0, jobs=1): err= 0: pid=89338: Thu Jul 25 09:11:54 2024 00:29:48.201 read: IOPS=178, BW=713KiB/s (730kB/s)(7148KiB/10024msec) 00:29:48.201 slat (usec): min=5, max=8068, avg=32.04, stdev=269.35 00:29:48.201 clat (msec): min=36, max=173, avg=89.57, stdev=20.77 00:29:48.201 lat (msec): min=36, max=173, avg=89.60, stdev=20.76 00:29:48.201 clat percentiles (msec): 00:29:48.201 | 1.00th=[ 54], 5.00th=[ 61], 10.00th=[ 64], 20.00th=[ 70], 00:29:48.201 | 30.00th=[ 80], 40.00th=[ 85], 50.00th=[ 91], 60.00th=[ 95], 00:29:48.201 | 70.00th=[ 97], 80.00th=[ 104], 90.00th=[ 113], 95.00th=[ 127], 00:29:48.201 | 99.00th=[ 153], 99.50th=[ 153], 99.90th=[ 174], 99.95th=[ 174], 00:29:48.201 | 99.99th=[ 174] 00:29:48.201 bw ( KiB/s): min= 512, max= 792, per=4.11%, avg=705.32, stdev=80.99, samples=19 00:29:48.201 iops : min= 128, max= 198, avg=176.32, stdev=20.26, samples=19 00:29:48.201 lat (msec) : 50=0.84%, 100=75.83%, 250=23.34% 00:29:48.201 cpu : usr=36.70%, sys=1.78%, ctx=1216, majf=0, minf=1075 00:29:48.201 IO depths : 1=0.1%, 2=2.7%, 4=10.7%, 8=72.3%, 16=14.3%, 32=0.0%, >=64=0.0% 00:29:48.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.201 complete : 0=0.0%, 4=89.9%, 8=7.8%, 16=2.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.201 issued rwts: total=1787,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.201 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.201 filename1: (groupid=0, jobs=1): err= 0: pid=89339: Thu Jul 25 09:11:54 2024 00:29:48.201 read: IOPS=175, BW=703KiB/s (720kB/s)(7036KiB/10013msec) 00:29:48.201 slat (usec): min=5, max=8053, avg=37.21, stdev=371.10 00:29:48.201 clat (msec): min=35, max=168, avg=90.82, stdev=19.42 00:29:48.201 lat (msec): min=35, max=168, avg=90.86, stdev=19.41 00:29:48.201 clat percentiles (msec): 00:29:48.201 | 1.00th=[ 50], 5.00th=[ 61], 10.00th=[ 65], 20.00th=[ 72], 00:29:48.201 | 30.00th=[ 85], 40.00th=[ 86], 50.00th=[ 94], 60.00th=[ 96], 00:29:48.201 | 70.00th=[ 97], 80.00th=[ 106], 90.00th=[ 116], 95.00th=[ 125], 00:29:48.201 | 99.00th=[ 144], 99.50th=[ 150], 99.90th=[ 169], 99.95th=[ 169], 00:29:48.201 | 99.99th=[ 169] 00:29:48.201 bw ( KiB/s): min= 576, max= 768, per=4.04%, avg=693.05, stdev=64.06, samples=19 00:29:48.201 iops : min= 144, max= 192, avg=173.21, stdev=16.06, samples=19 00:29:48.201 lat (msec) : 50=1.14%, 100=73.17%, 250=25.70% 00:29:48.201 cpu : usr=34.08%, sys=1.72%, ctx=958, majf=0, minf=1075 00:29:48.201 IO depths : 1=0.1%, 2=3.2%, 4=12.9%, 8=69.8%, 16=14.0%, 32=0.0%, >=64=0.0% 00:29:48.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.201 complete : 0=0.0%, 4=90.6%, 8=6.6%, 16=2.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.201 issued rwts: total=1759,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.201 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.201 filename1: (groupid=0, jobs=1): err= 0: pid=89340: Thu Jul 25 09:11:54 2024 00:29:48.201 read: IOPS=174, BW=699KiB/s (716kB/s)(7024KiB/10046msec) 00:29:48.201 slat (usec): min=5, max=8041, avg=32.78, stdev=331.39 00:29:48.201 clat (msec): min=43, max=173, avg=91.19, stdev=21.84 00:29:48.201 lat (msec): min=43, max=173, avg=91.23, stdev=21.85 00:29:48.201 clat percentiles (msec): 00:29:48.201 | 1.00th=[ 53], 5.00th=[ 61], 10.00th=[ 63], 20.00th=[ 71], 00:29:48.201 | 30.00th=[ 81], 40.00th=[ 85], 50.00th=[ 93], 60.00th=[ 96], 00:29:48.201 | 70.00th=[ 101], 80.00th=[ 108], 90.00th=[ 120], 95.00th=[ 128], 00:29:48.201 | 99.00th=[ 157], 
99.50th=[ 165], 99.90th=[ 174], 99.95th=[ 174], 00:29:48.201 | 99.99th=[ 174] 00:29:48.201 bw ( KiB/s): min= 512, max= 793, per=4.07%, avg=697.25, stdev=88.39, samples=20 00:29:48.202 iops : min= 128, max= 198, avg=174.30, stdev=22.08, samples=20 00:29:48.202 lat (msec) : 50=0.74%, 100=68.74%, 250=30.52% 00:29:48.202 cpu : usr=35.28%, sys=1.76%, ctx=1014, majf=0, minf=1074 00:29:48.202 IO depths : 1=0.1%, 2=2.7%, 4=10.6%, 8=72.3%, 16=14.3%, 32=0.0%, >=64=0.0% 00:29:48.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.202 complete : 0=0.0%, 4=89.8%, 8=7.8%, 16=2.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.202 issued rwts: total=1756,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.202 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.202 filename1: (groupid=0, jobs=1): err= 0: pid=89341: Thu Jul 25 09:11:54 2024 00:29:48.202 read: IOPS=185, BW=743KiB/s (761kB/s)(7488KiB/10074msec) 00:29:48.202 slat (usec): min=4, max=6031, avg=25.87, stdev=191.40 00:29:48.202 clat (msec): min=4, max=153, avg=85.72, stdev=27.93 00:29:48.202 lat (msec): min=4, max=153, avg=85.75, stdev=27.94 00:29:48.202 clat percentiles (msec): 00:29:48.202 | 1.00th=[ 6], 5.00th=[ 9], 10.00th=[ 61], 20.00th=[ 69], 00:29:48.202 | 30.00th=[ 79], 40.00th=[ 88], 50.00th=[ 92], 60.00th=[ 96], 00:29:48.202 | 70.00th=[ 99], 80.00th=[ 107], 90.00th=[ 112], 95.00th=[ 124], 00:29:48.202 | 99.00th=[ 140], 99.50th=[ 140], 99.90th=[ 155], 99.95th=[ 155], 00:29:48.202 | 99.99th=[ 155] 00:29:48.202 bw ( KiB/s): min= 622, max= 1648, per=4.33%, avg=742.20, stdev=218.86, samples=20 00:29:48.202 iops : min= 155, max= 412, avg=185.50, stdev=54.73, samples=20 00:29:48.202 lat (msec) : 10=5.77%, 20=0.11%, 50=2.35%, 100=63.84%, 250=27.94% 00:29:48.202 cpu : usr=39.82%, sys=2.35%, ctx=1275, majf=0, minf=1073 00:29:48.202 IO depths : 1=0.2%, 2=2.9%, 4=10.8%, 8=71.5%, 16=14.5%, 32=0.0%, >=64=0.0% 00:29:48.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.202 complete : 0=0.0%, 4=90.3%, 8=7.4%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.202 issued rwts: total=1872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.202 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.202 filename2: (groupid=0, jobs=1): err= 0: pid=89342: Thu Jul 25 09:11:54 2024 00:29:48.202 read: IOPS=190, BW=761KiB/s (779kB/s)(7652KiB/10053msec) 00:29:48.202 slat (usec): min=7, max=8068, avg=26.38, stdev=206.06 00:29:48.202 clat (msec): min=23, max=150, avg=83.81, stdev=20.60 00:29:48.202 lat (msec): min=23, max=150, avg=83.84, stdev=20.60 00:29:48.202 clat percentiles (msec): 00:29:48.202 | 1.00th=[ 32], 5.00th=[ 55], 10.00th=[ 61], 20.00th=[ 65], 00:29:48.202 | 30.00th=[ 71], 40.00th=[ 80], 50.00th=[ 86], 60.00th=[ 92], 00:29:48.202 | 70.00th=[ 96], 80.00th=[ 101], 90.00th=[ 107], 95.00th=[ 116], 00:29:48.202 | 99.00th=[ 132], 99.50th=[ 144], 99.90th=[ 150], 99.95th=[ 150], 00:29:48.202 | 99.99th=[ 150] 00:29:48.202 bw ( KiB/s): min= 661, max= 888, per=4.44%, avg=761.45, stdev=57.94, samples=20 00:29:48.202 iops : min= 165, max= 222, avg=190.35, stdev=14.51, samples=20 00:29:48.202 lat (msec) : 50=4.76%, 100=75.33%, 250=19.92% 00:29:48.202 cpu : usr=41.31%, sys=2.16%, ctx=1197, majf=0, minf=1074 00:29:48.202 IO depths : 1=0.1%, 2=0.6%, 4=2.5%, 8=81.2%, 16=15.6%, 32=0.0%, >=64=0.0% 00:29:48.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.202 complete : 0=0.0%, 4=87.5%, 8=11.9%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.202 issued rwts: 
total=1913,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.202 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.202 filename2: (groupid=0, jobs=1): err= 0: pid=89343: Thu Jul 25 09:11:54 2024 00:29:48.202 read: IOPS=193, BW=773KiB/s (791kB/s)(7736KiB/10009msec) 00:29:48.202 slat (usec): min=5, max=8045, avg=32.29, stdev=258.40 00:29:48.202 clat (msec): min=11, max=188, avg=82.65, stdev=23.29 00:29:48.202 lat (msec): min=11, max=188, avg=82.68, stdev=23.29 00:29:48.202 clat percentiles (msec): 00:29:48.202 | 1.00th=[ 30], 5.00th=[ 45], 10.00th=[ 58], 20.00th=[ 65], 00:29:48.202 | 30.00th=[ 69], 40.00th=[ 74], 50.00th=[ 85], 60.00th=[ 91], 00:29:48.202 | 70.00th=[ 95], 80.00th=[ 102], 90.00th=[ 110], 95.00th=[ 120], 00:29:48.202 | 99.00th=[ 153], 99.50th=[ 161], 99.90th=[ 188], 99.95th=[ 188], 00:29:48.202 | 99.99th=[ 188] 00:29:48.202 bw ( KiB/s): min= 552, max= 928, per=4.46%, avg=765.68, stdev=84.31, samples=19 00:29:48.202 iops : min= 138, max= 232, avg=191.37, stdev=21.03, samples=19 00:29:48.202 lat (msec) : 20=0.47%, 50=6.57%, 100=72.29%, 250=20.68% 00:29:48.202 cpu : usr=41.23%, sys=1.91%, ctx=1127, majf=0, minf=1075 00:29:48.202 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=83.3%, 16=15.7%, 32=0.0%, >=64=0.0% 00:29:48.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.202 complete : 0=0.0%, 4=86.9%, 8=13.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.202 issued rwts: total=1934,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.202 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.202 filename2: (groupid=0, jobs=1): err= 0: pid=89344: Thu Jul 25 09:11:54 2024 00:29:48.202 read: IOPS=189, BW=758KiB/s (776kB/s)(7624KiB/10059msec) 00:29:48.202 slat (usec): min=5, max=8026, avg=34.23, stdev=306.54 00:29:48.202 clat (msec): min=23, max=150, avg=84.12, stdev=20.69 00:29:48.202 lat (msec): min=23, max=150, avg=84.15, stdev=20.69 00:29:48.202 clat percentiles (msec): 00:29:48.202 | 1.00th=[ 35], 5.00th=[ 50], 10.00th=[ 61], 20.00th=[ 66], 00:29:48.202 | 30.00th=[ 71], 40.00th=[ 82], 50.00th=[ 86], 60.00th=[ 93], 00:29:48.202 | 70.00th=[ 96], 80.00th=[ 101], 90.00th=[ 108], 95.00th=[ 116], 00:29:48.202 | 99.00th=[ 130], 99.50th=[ 136], 99.90th=[ 150], 99.95th=[ 150], 00:29:48.202 | 99.99th=[ 150] 00:29:48.202 bw ( KiB/s): min= 680, max= 888, per=4.42%, avg=758.50, stdev=54.27, samples=20 00:29:48.202 iops : min= 170, max= 222, avg=189.60, stdev=13.59, samples=20 00:29:48.202 lat (msec) : 50=5.35%, 100=74.45%, 250=20.20% 00:29:48.202 cpu : usr=37.41%, sys=1.91%, ctx=1102, majf=0, minf=1074 00:29:48.202 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=82.1%, 16=15.8%, 32=0.0%, >=64=0.0% 00:29:48.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.202 complete : 0=0.0%, 4=87.4%, 8=12.3%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.202 issued rwts: total=1906,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.202 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.202 filename2: (groupid=0, jobs=1): err= 0: pid=89345: Thu Jul 25 09:11:54 2024 00:29:48.202 read: IOPS=170, BW=682KiB/s (698kB/s)(6828KiB/10013msec) 00:29:48.202 slat (usec): min=5, max=8059, avg=42.36, stdev=434.33 00:29:48.202 clat (msec): min=3, max=196, avg=93.59, stdev=25.35 00:29:48.202 lat (msec): min=3, max=196, avg=93.63, stdev=25.35 00:29:48.202 clat percentiles (msec): 00:29:48.202 | 1.00th=[ 6], 5.00th=[ 61], 10.00th=[ 70], 20.00th=[ 83], 00:29:48.202 | 30.00th=[ 85], 40.00th=[ 94], 50.00th=[ 96], 60.00th=[ 96], 00:29:48.202 | 
70.00th=[ 105], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 124], 00:29:48.202 | 99.00th=[ 174], 99.50th=[ 174], 99.90th=[ 197], 99.95th=[ 197], 00:29:48.202 | 99.99th=[ 197] 00:29:48.202 bw ( KiB/s): min= 508, max= 768, per=3.80%, avg=652.42, stdev=89.03, samples=19 00:29:48.202 iops : min= 127, max= 192, avg=163.05, stdev=22.26, samples=19 00:29:48.202 lat (msec) : 4=0.53%, 10=0.94%, 20=1.52%, 50=0.94%, 100=65.38% 00:29:48.202 lat (msec) : 250=30.70% 00:29:48.202 cpu : usr=31.57%, sys=1.59%, ctx=888, majf=0, minf=1075 00:29:48.202 IO depths : 1=0.1%, 2=4.5%, 4=17.8%, 8=64.3%, 16=13.5%, 32=0.0%, >=64=0.0% 00:29:48.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.202 complete : 0=0.0%, 4=92.2%, 8=3.9%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.202 issued rwts: total=1707,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.202 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.202 filename2: (groupid=0, jobs=1): err= 0: pid=89346: Thu Jul 25 09:11:54 2024 00:29:48.202 read: IOPS=180, BW=722KiB/s (739kB/s)(7252KiB/10042msec) 00:29:48.202 slat (usec): min=5, max=4055, avg=26.88, stdev=164.06 00:29:48.202 clat (msec): min=43, max=191, avg=88.37, stdev=20.37 00:29:48.202 lat (msec): min=43, max=191, avg=88.39, stdev=20.37 00:29:48.202 clat percentiles (msec): 00:29:48.202 | 1.00th=[ 50], 5.00th=[ 59], 10.00th=[ 64], 20.00th=[ 69], 00:29:48.202 | 30.00th=[ 75], 40.00th=[ 86], 50.00th=[ 90], 60.00th=[ 94], 00:29:48.202 | 70.00th=[ 97], 80.00th=[ 104], 90.00th=[ 112], 95.00th=[ 121], 00:29:48.202 | 99.00th=[ 153], 99.50th=[ 167], 99.90th=[ 192], 99.95th=[ 192], 00:29:48.202 | 99.99th=[ 192] 00:29:48.202 bw ( KiB/s): min= 512, max= 816, per=4.20%, avg=720.55, stdev=71.21, samples=20 00:29:48.202 iops : min= 128, max= 204, avg=180.05, stdev=17.80, samples=20 00:29:48.202 lat (msec) : 50=1.10%, 100=74.57%, 250=24.32% 00:29:48.202 cpu : usr=43.19%, sys=2.24%, ctx=1374, majf=0, minf=1075 00:29:48.202 IO depths : 1=0.1%, 2=2.2%, 4=8.8%, 8=74.4%, 16=14.6%, 32=0.0%, >=64=0.0% 00:29:48.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.202 complete : 0=0.0%, 4=89.3%, 8=8.8%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.202 issued rwts: total=1813,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.202 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.202 filename2: (groupid=0, jobs=1): err= 0: pid=89347: Thu Jul 25 09:11:54 2024 00:29:48.202 read: IOPS=171, BW=685KiB/s (701kB/s)(6856KiB/10008msec) 00:29:48.202 slat (usec): min=5, max=8056, avg=33.58, stdev=335.52 00:29:48.202 clat (msec): min=7, max=192, avg=93.22, stdev=23.26 00:29:48.202 lat (msec): min=7, max=192, avg=93.25, stdev=23.25 00:29:48.202 clat percentiles (msec): 00:29:48.202 | 1.00th=[ 33], 5.00th=[ 61], 10.00th=[ 67], 20.00th=[ 72], 00:29:48.202 | 30.00th=[ 84], 40.00th=[ 86], 50.00th=[ 94], 60.00th=[ 96], 00:29:48.202 | 70.00th=[ 101], 80.00th=[ 108], 90.00th=[ 122], 95.00th=[ 132], 00:29:48.202 | 99.00th=[ 171], 99.50th=[ 171], 99.90th=[ 192], 99.95th=[ 192], 00:29:48.202 | 99.99th=[ 192] 00:29:48.203 bw ( KiB/s): min= 512, max= 768, per=3.91%, avg=670.26, stdev=89.59, samples=19 00:29:48.203 iops : min= 128, max= 192, avg=167.53, stdev=22.41, samples=19 00:29:48.203 lat (msec) : 10=0.18%, 20=0.76%, 50=0.93%, 100=67.85%, 250=30.28% 00:29:48.203 cpu : usr=31.74%, sys=1.46%, ctx=1094, majf=0, minf=1073 00:29:48.203 IO depths : 1=0.1%, 2=4.0%, 4=15.8%, 8=66.5%, 16=13.7%, 32=0.0%, >=64=0.0% 00:29:48.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.203 complete : 0=0.0%, 4=91.5%, 8=5.0%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.203 issued rwts: total=1714,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.203 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.203 filename2: (groupid=0, jobs=1): err= 0: pid=89348: Thu Jul 25 09:11:54 2024 00:29:48.203 read: IOPS=171, BW=687KiB/s (703kB/s)(6876KiB/10009msec) 00:29:48.203 slat (usec): min=5, max=4061, avg=31.67, stdev=217.75 00:29:48.203 clat (msec): min=3, max=198, avg=92.94, stdev=25.79 00:29:48.203 lat (msec): min=3, max=199, avg=92.97, stdev=25.79 00:29:48.203 clat percentiles (msec): 00:29:48.203 | 1.00th=[ 6], 5.00th=[ 56], 10.00th=[ 67], 20.00th=[ 81], 00:29:48.203 | 30.00th=[ 88], 40.00th=[ 91], 50.00th=[ 95], 60.00th=[ 97], 00:29:48.203 | 70.00th=[ 104], 80.00th=[ 108], 90.00th=[ 122], 95.00th=[ 127], 00:29:48.203 | 99.00th=[ 159], 99.50th=[ 186], 99.90th=[ 199], 99.95th=[ 199], 00:29:48.203 | 99.99th=[ 199] 00:29:48.203 bw ( KiB/s): min= 496, max= 768, per=3.80%, avg=651.26, stdev=97.04, samples=19 00:29:48.203 iops : min= 124, max= 192, avg=162.79, stdev=24.23, samples=19 00:29:48.203 lat (msec) : 4=0.52%, 10=1.92%, 20=1.40%, 50=0.93%, 100=60.56% 00:29:48.203 lat (msec) : 250=34.67% 00:29:48.203 cpu : usr=42.29%, sys=1.88%, ctx=1203, majf=0, minf=1075 00:29:48.203 IO depths : 1=0.1%, 2=4.7%, 4=18.6%, 8=63.4%, 16=13.3%, 32=0.0%, >=64=0.0% 00:29:48.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.203 complete : 0=0.0%, 4=92.4%, 8=3.5%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.203 issued rwts: total=1719,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.203 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.203 filename2: (groupid=0, jobs=1): err= 0: pid=89349: Thu Jul 25 09:11:54 2024 00:29:48.203 read: IOPS=175, BW=703KiB/s (720kB/s)(7048KiB/10030msec) 00:29:48.203 slat (usec): min=5, max=8062, avg=34.37, stdev=307.11 00:29:48.203 clat (msec): min=38, max=155, avg=90.90, stdev=21.34 00:29:48.203 lat (msec): min=38, max=156, avg=90.93, stdev=21.33 00:29:48.203 clat percentiles (msec): 00:29:48.203 | 1.00th=[ 53], 5.00th=[ 61], 10.00th=[ 63], 20.00th=[ 71], 00:29:48.203 | 30.00th=[ 83], 40.00th=[ 85], 50.00th=[ 88], 60.00th=[ 96], 00:29:48.203 | 70.00th=[ 99], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 130], 00:29:48.203 | 99.00th=[ 153], 99.50th=[ 157], 99.90th=[ 157], 99.95th=[ 157], 00:29:48.203 | 99.99th=[ 157] 00:29:48.203 bw ( KiB/s): min= 512, max= 768, per=4.07%, avg=698.20, stdev=82.99, samples=20 00:29:48.203 iops : min= 128, max= 192, avg=174.50, stdev=20.75, samples=20 00:29:48.203 lat (msec) : 50=0.57%, 100=71.57%, 250=27.87% 00:29:48.203 cpu : usr=33.29%, sys=1.55%, ctx=891, majf=0, minf=1073 00:29:48.203 IO depths : 1=0.1%, 2=2.8%, 4=11.3%, 8=71.6%, 16=14.2%, 32=0.0%, >=64=0.0% 00:29:48.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.203 complete : 0=0.0%, 4=90.1%, 8=7.5%, 16=2.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.203 issued rwts: total=1762,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.203 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:48.203 00:29:48.203 Run status group 0 (all jobs): 00:29:48.203 READ: bw=16.7MiB/s (17.6MB/s), 657KiB/s-773KiB/s (672kB/s-791kB/s), io=169MiB (177MB), run=10008-10077msec 00:29:48.462 ----------------------------------------------------- 00:29:48.462 Suppressions used: 00:29:48.462 count bytes template 00:29:48.462 45 402 /usr/src/fio/parse.c 00:29:48.462 1 8 
libtcmalloc_minimal.so 00:29:48.462 1 904 libcrypto.so 00:29:48.462 ----------------------------------------------------- 00:29:48.462 00:29:48.462 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:29:48.462 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:48.462 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:48.462 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:48.462 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:48.462 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:48.462 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.462 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@10 -- # set +x 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:48.463 bdev_null0 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:48.463 [2024-07-25 09:11:55.465356] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 
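The rpc_cmd calls traced in create_subsystem/destroy_subsystem above map one-to-one onto SPDK's scripts/rpc.py. A minimal standalone sketch of the same setup and teardown for subsystem 0, assuming rpc_cmd wraps the stock scripts/rpc.py talking to the default RPC socket; the bdev sizes, NQNs and the 10.0.0.2:4420 listener are the values used by this run:

# null bdev with 16-byte metadata and DIF type 1, then an NVMe/TCP subsystem exporting it
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# teardown, mirroring destroy_subsystems
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
scripts/rpc.py bdev_null_delete bdev_null0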
00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:48.463 bdev_null1 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:48.463 { 00:29:48.463 "params": { 00:29:48.463 "name": "Nvme$subsystem", 00:29:48.463 "trtype": "$TEST_TRANSPORT", 00:29:48.463 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:48.463 "adrfam": "ipv4", 00:29:48.463 "trsvcid": "$NVMF_PORT", 00:29:48.463 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:48.463 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:48.463 "hdgst": ${hdgst:-false}, 00:29:48.463 "ddgst": ${ddgst:-false} 00:29:48.463 }, 00:29:48.463 "method": "bdev_nvme_attach_controller" 00:29:48.463 } 00:29:48.463 EOF 00:29:48.463 )") 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:48.463 09:11:55 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:48.463 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:48.464 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:48.464 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:48.464 09:11:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:48.464 09:11:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:48.464 { 00:29:48.464 "params": { 00:29:48.464 "name": "Nvme$subsystem", 00:29:48.464 "trtype": "$TEST_TRANSPORT", 00:29:48.464 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:48.464 "adrfam": "ipv4", 00:29:48.464 "trsvcid": "$NVMF_PORT", 00:29:48.464 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:48.464 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:48.464 "hdgst": ${hdgst:-false}, 00:29:48.464 "ddgst": ${ddgst:-false} 00:29:48.464 }, 00:29:48.464 "method": "bdev_nvme_attach_controller" 00:29:48.464 } 00:29:48.464 EOF 00:29:48.464 )") 00:29:48.464 09:11:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:48.464 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:48.464 09:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:48.464 09:11:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:29:48.464 09:11:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:29:48.464 09:11:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:48.464 "params": { 00:29:48.464 "name": "Nvme0", 00:29:48.464 "trtype": "tcp", 00:29:48.464 "traddr": "10.0.0.2", 00:29:48.464 "adrfam": "ipv4", 00:29:48.464 "trsvcid": "4420", 00:29:48.464 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:48.464 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:48.464 "hdgst": false, 00:29:48.464 "ddgst": false 00:29:48.464 }, 00:29:48.464 "method": "bdev_nvme_attach_controller" 00:29:48.464 },{ 00:29:48.464 "params": { 00:29:48.464 "name": "Nvme1", 00:29:48.464 "trtype": "tcp", 00:29:48.464 "traddr": "10.0.0.2", 00:29:48.464 "adrfam": "ipv4", 00:29:48.464 "trsvcid": "4420", 00:29:48.464 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:48.464 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:48.464 "hdgst": false, 00:29:48.464 "ddgst": false 00:29:48.464 }, 00:29:48.464 "method": "bdev_nvme_attach_controller" 00:29:48.464 }' 00:29:48.464 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:48.464 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:48.464 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:29:48.464 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:48.464 09:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:48.722 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:29:48.722 ... 00:29:48.722 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:29:48.722 ... 
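The printf output above is the bdev-side configuration that fio consumes: one bdev_nvme_attach_controller entry per target, handed to the spdk_bdev ioengine on /dev/fd/62 while the generated job file arrives on /dev/fd/61. A minimal sketch of an equivalent standalone invocation, assuming the entries get wrapped in SPDK's usual subsystems/config JSON layout (the wrapper itself is not shown in the trace), a job file that only approximates what gen_fio_conf writes, and Nvme0n1 as the resulting bdev name (assumed from the "name": "Nvme0" parameter):

cat > bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF

cat > dif.fio <<'EOF'
[filename0]
# bdev exposed by the attach call above (assumed name); thread mode is required by the SPDK plugin
filename=Nvme0n1
thread=1
rw=randread
bs=4k
iodepth=16
time_based=1
runtime=10
EOF

# libasan is preloaded here only because this run is an ASAN build
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio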
00:29:48.722 fio-3.35 00:29:48.722 Starting 4 threads 00:29:55.321 00:29:55.321 filename0: (groupid=0, jobs=1): err= 0: pid=89489: Thu Jul 25 09:12:01 2024 00:29:55.321 read: IOPS=1382, BW=10.8MiB/s (11.3MB/s)(54.0MiB/5005msec) 00:29:55.321 slat (nsec): min=5913, max=66713, avg=15875.73, stdev=5798.15 00:29:55.321 clat (usec): min=2072, max=12837, avg=5720.73, stdev=620.82 00:29:55.321 lat (usec): min=2083, max=12858, avg=5736.60, stdev=619.71 00:29:55.321 clat percentiles (usec): 00:29:55.321 | 1.00th=[ 4113], 5.00th=[ 5014], 10.00th=[ 5080], 20.00th=[ 5145], 00:29:55.321 | 30.00th=[ 5211], 40.00th=[ 5866], 50.00th=[ 5932], 60.00th=[ 5997], 00:29:55.321 | 70.00th=[ 6063], 80.00th=[ 6128], 90.00th=[ 6194], 95.00th=[ 6259], 00:29:55.321 | 99.00th=[ 7111], 99.50th=[ 7373], 99.90th=[12649], 99.95th=[12780], 00:29:55.321 | 99.99th=[12780] 00:29:55.321 bw ( KiB/s): min=10368, max=12288, per=20.51%, avg=11008.00, stdev=783.84, samples=9 00:29:55.321 iops : min= 1296, max= 1536, avg=1376.00, stdev=97.98, samples=9 00:29:55.321 lat (msec) : 4=0.27%, 10=99.49%, 20=0.23% 00:29:55.321 cpu : usr=90.83%, sys=8.09%, ctx=185, majf=0, minf=1074 00:29:55.321 IO depths : 1=0.1%, 2=23.8%, 4=51.0%, 8=25.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:55.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:55.321 complete : 0=0.0%, 4=90.5%, 8=9.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:55.321 issued rwts: total=6918,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:55.321 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:55.321 filename0: (groupid=0, jobs=1): err= 0: pid=89490: Thu Jul 25 09:12:01 2024 00:29:55.321 read: IOPS=1884, BW=14.7MiB/s (15.4MB/s)(73.7MiB/5004msec) 00:29:55.321 slat (nsec): min=5769, max=64197, avg=17009.39, stdev=5474.21 00:29:55.321 clat (usec): min=1032, max=7741, avg=4205.27, stdev=1353.43 00:29:55.321 lat (usec): min=1046, max=7764, avg=4222.28, stdev=1353.38 00:29:55.321 clat percentiles (usec): 00:29:55.321 | 1.00th=[ 1713], 5.00th=[ 1762], 10.00th=[ 2474], 20.00th=[ 2638], 00:29:55.321 | 30.00th=[ 3621], 40.00th=[ 3720], 50.00th=[ 4015], 60.00th=[ 4883], 00:29:55.321 | 70.00th=[ 5211], 80.00th=[ 5735], 90.00th=[ 5932], 95.00th=[ 5997], 00:29:55.321 | 99.00th=[ 6194], 99.50th=[ 6390], 99.90th=[ 6521], 99.95th=[ 6587], 00:29:55.321 | 99.99th=[ 7767] 00:29:55.321 bw ( KiB/s): min=14256, max=16817, per=28.32%, avg=15198.33, stdev=1133.29, samples=9 00:29:55.321 iops : min= 1782, max= 2102, avg=1899.78, stdev=141.64, samples=9 00:29:55.321 lat (msec) : 2=8.71%, 4=41.11%, 10=50.18% 00:29:55.321 cpu : usr=91.84%, sys=7.14%, ctx=10, majf=0, minf=1074 00:29:55.321 IO depths : 1=0.1%, 2=0.1%, 4=63.9%, 8=36.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:55.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:55.321 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:55.321 issued rwts: total=9428,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:55.321 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:55.321 filename1: (groupid=0, jobs=1): err= 0: pid=89491: Thu Jul 25 09:12:01 2024 00:29:55.321 read: IOPS=1721, BW=13.4MiB/s (14.1MB/s)(67.3MiB/5003msec) 00:29:55.321 slat (nsec): min=5778, max=67975, avg=18052.68, stdev=5349.41 00:29:55.321 clat (usec): min=1597, max=8252, avg=4596.51, stdev=1182.12 00:29:55.322 lat (usec): min=1609, max=8275, avg=4614.56, stdev=1182.56 00:29:55.322 clat percentiles (usec): 00:29:55.322 | 1.00th=[ 2474], 5.00th=[ 2540], 10.00th=[ 2606], 20.00th=[ 3523], 00:29:55.322 | 
30.00th=[ 3720], 40.00th=[ 4146], 50.00th=[ 5080], 60.00th=[ 5211], 00:29:55.322 | 70.00th=[ 5342], 80.00th=[ 5800], 90.00th=[ 5932], 95.00th=[ 6063], 00:29:55.322 | 99.00th=[ 6259], 99.50th=[ 6456], 99.90th=[ 6783], 99.95th=[ 7963], 00:29:55.322 | 99.99th=[ 8225] 00:29:55.322 bw ( KiB/s): min=12160, max=14624, per=25.64%, avg=13758.22, stdev=1057.59, samples=9 00:29:55.322 iops : min= 1520, max= 1828, avg=1719.78, stdev=132.20, samples=9 00:29:55.322 lat (msec) : 2=0.13%, 4=38.34%, 10=61.53% 00:29:55.322 cpu : usr=92.30%, sys=6.72%, ctx=7, majf=0, minf=1073 00:29:55.322 IO depths : 1=0.1%, 2=6.3%, 4=60.5%, 8=33.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:55.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:55.322 complete : 0=0.0%, 4=97.6%, 8=2.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:55.322 issued rwts: total=8613,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:55.322 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:55.322 filename1: (groupid=0, jobs=1): err= 0: pid=89492: Thu Jul 25 09:12:01 2024 00:29:55.322 read: IOPS=1721, BW=13.5MiB/s (14.1MB/s)(67.3MiB/5002msec) 00:29:55.322 slat (usec): min=5, max=107, avg=18.20, stdev= 5.13 00:29:55.322 clat (usec): min=1575, max=7290, avg=4594.50, stdev=1186.08 00:29:55.322 lat (usec): min=1590, max=7313, avg=4612.70, stdev=1185.49 00:29:55.322 clat percentiles (usec): 00:29:55.322 | 1.00th=[ 2442], 5.00th=[ 2540], 10.00th=[ 2573], 20.00th=[ 3523], 00:29:55.322 | 30.00th=[ 3720], 40.00th=[ 4146], 50.00th=[ 5080], 60.00th=[ 5211], 00:29:55.322 | 70.00th=[ 5342], 80.00th=[ 5800], 90.00th=[ 5932], 95.00th=[ 6063], 00:29:55.322 | 99.00th=[ 6259], 99.50th=[ 6456], 99.90th=[ 6783], 99.95th=[ 6980], 00:29:55.322 | 99.99th=[ 7308] 00:29:55.322 bw ( KiB/s): min=12160, max=14624, per=25.64%, avg=13760.89, stdev=1053.08, samples=9 00:29:55.322 iops : min= 1520, max= 1828, avg=1720.11, stdev=131.63, samples=9 00:29:55.322 lat (msec) : 2=0.17%, 4=38.31%, 10=61.51% 00:29:55.322 cpu : usr=91.46%, sys=7.60%, ctx=13, majf=0, minf=1075 00:29:55.322 IO depths : 1=0.1%, 2=6.3%, 4=60.5%, 8=33.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:55.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:55.322 complete : 0=0.0%, 4=97.6%, 8=2.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:55.322 issued rwts: total=8613,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:55.322 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:55.322 00:29:55.322 Run status group 0 (all jobs): 00:29:55.322 READ: bw=52.4MiB/s (54.9MB/s), 10.8MiB/s-14.7MiB/s (11.3MB/s-15.4MB/s), io=262MiB (275MB), run=5002-5005msec 00:29:55.889 ----------------------------------------------------- 00:29:55.889 Suppressions used: 00:29:55.889 count bytes template 00:29:55.889 6 52 /usr/src/fio/parse.c 00:29:55.889 1 8 libtcmalloc_minimal.so 00:29:55.889 1 904 libcrypto.so 00:29:55.889 ----------------------------------------------------- 00:29:55.889 00:29:55.889 09:12:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:29:55.889 09:12:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:55.889 09:12:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:55.889 09:12:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:55.889 09:12:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:55.889 09:12:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:55.889 
09:12:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.889 09:12:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:55.889 09:12:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.889 09:12:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:55.889 09:12:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.889 09:12:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:55.889 09:12:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.889 09:12:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:55.889 09:12:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:55.889 09:12:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:29:55.889 09:12:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:55.889 09:12:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.889 09:12:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:55.889 09:12:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.889 09:12:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:55.889 09:12:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.889 09:12:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:55.889 09:12:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.889 00:29:55.889 real 0m27.567s 00:29:55.889 user 2m7.241s 00:29:55.889 sys 0m8.374s 00:29:55.889 09:12:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:55.889 09:12:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:55.889 ************************************ 00:29:55.889 END TEST fio_dif_rand_params 00:29:55.889 ************************************ 00:29:55.889 09:12:02 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:29:55.889 09:12:02 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:55.889 09:12:02 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:55.889 09:12:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:55.889 ************************************ 00:29:55.889 START TEST fio_dif_digest 00:29:55.889 ************************************ 00:29:55.889 09:12:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:29:55.889 09:12:02 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:29:55.889 09:12:02 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:29:55.889 09:12:02 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:29:55.889 09:12:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:29:55.889 09:12:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:29:55.889 09:12:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:29:55.889 09:12:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:29:55.889 09:12:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 
00:29:55.889 09:12:02 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:29:55.889 09:12:02 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:29:55.889 09:12:02 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:29:55.889 09:12:02 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:29:55.889 09:12:02 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:29:55.889 09:12:02 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:29:55.889 09:12:02 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:29:55.889 09:12:02 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:29:55.889 09:12:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.889 09:12:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:55.889 bdev_null0 00:29:55.889 09:12:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.889 09:12:02 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:55.889 09:12:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.889 09:12:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:55.889 09:12:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.889 09:12:02 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:55.889 09:12:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.889 09:12:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:55.889 09:12:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.889 09:12:02 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:55.889 09:12:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.889 09:12:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:55.889 [2024-07-25 09:12:02.916443] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:55.889 09:12:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.889 09:12:02 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:29:55.889 09:12:02 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:29:55.889 09:12:02 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:55.889 09:12:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:29:55.889 09:12:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:29:55.889 09:12:02 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:55.889 09:12:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:55.889 09:12:02 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:29:55.890 09:12:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:55.890 09:12:02 nvmf_dif.fio_dif_digest -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:55.890 { 00:29:55.890 "params": { 00:29:55.890 "name": "Nvme$subsystem", 00:29:55.890 "trtype": "$TEST_TRANSPORT", 00:29:55.890 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:55.890 "adrfam": "ipv4", 00:29:55.890 "trsvcid": "$NVMF_PORT", 00:29:55.890 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:55.890 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:55.890 "hdgst": ${hdgst:-false}, 00:29:55.890 "ddgst": ${ddgst:-false} 00:29:55.890 }, 00:29:55.890 "method": "bdev_nvme_attach_controller" 00:29:55.890 } 00:29:55.890 EOF 00:29:55.890 )") 00:29:55.890 09:12:02 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:29:55.890 09:12:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:55.890 09:12:02 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:29:55.890 09:12:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:55.890 09:12:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:55.890 09:12:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:55.890 09:12:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:29:55.890 09:12:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:55.890 09:12:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:55.890 09:12:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:29:55.890 09:12:02 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:29:55.890 09:12:02 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:29:55.890 09:12:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:55.890 09:12:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:29:55.890 09:12:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:55.890 09:12:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:29:55.890 09:12:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:29:55.890 09:12:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:55.890 "params": { 00:29:55.890 "name": "Nvme0", 00:29:55.890 "trtype": "tcp", 00:29:55.890 "traddr": "10.0.0.2", 00:29:55.890 "adrfam": "ipv4", 00:29:55.890 "trsvcid": "4420", 00:29:55.890 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:55.890 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:55.890 "hdgst": true, 00:29:55.890 "ddgst": true 00:29:55.890 }, 00:29:55.890 "method": "bdev_nvme_attach_controller" 00:29:55.890 }' 00:29:55.890 09:12:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:55.890 09:12:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:55.890 09:12:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # break 00:29:55.890 09:12:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:55.890 09:12:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:56.148 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:29:56.148 ... 00:29:56.148 fio-3.35 00:29:56.148 Starting 3 threads 00:30:08.373 00:30:08.373 filename0: (groupid=0, jobs=1): err= 0: pid=89602: Thu Jul 25 09:12:13 2024 00:30:08.373 read: IOPS=189, BW=23.7MiB/s (24.9MB/s)(238MiB/10011msec) 00:30:08.373 slat (nsec): min=6486, max=84665, avg=21468.46, stdev=8160.99 00:30:08.373 clat (usec): min=13780, max=20568, avg=15738.70, stdev=734.95 00:30:08.373 lat (usec): min=13794, max=20625, avg=15760.17, stdev=735.50 00:30:08.373 clat percentiles (usec): 00:30:08.373 | 1.00th=[14091], 5.00th=[14746], 10.00th=[14877], 20.00th=[15139], 00:30:08.373 | 30.00th=[15401], 40.00th=[15533], 50.00th=[15664], 60.00th=[15795], 00:30:08.373 | 70.00th=[16057], 80.00th=[16450], 90.00th=[16712], 95.00th=[16909], 00:30:08.373 | 99.00th=[17433], 99.50th=[17433], 99.90th=[20579], 99.95th=[20579], 00:30:08.373 | 99.99th=[20579] 00:30:08.373 bw ( KiB/s): min=23040, max=25344, per=33.36%, avg=24330.79, stdev=722.60, samples=19 00:30:08.373 iops : min= 180, max= 198, avg=190.05, stdev= 5.60, samples=19 00:30:08.373 lat (msec) : 20=99.74%, 50=0.26% 00:30:08.373 cpu : usr=92.51%, sys=6.82%, ctx=11, majf=0, minf=1074 00:30:08.373 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:08.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.373 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.373 issued rwts: total=1902,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:08.373 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:08.373 filename0: (groupid=0, jobs=1): err= 0: pid=89603: Thu Jul 25 09:12:14 2024 00:30:08.373 read: IOPS=190, BW=23.8MiB/s (24.9MB/s)(238MiB/10009msec) 00:30:08.373 slat (nsec): min=6431, max=84599, avg=20915.73, stdev=7950.49 00:30:08.373 clat (usec): min=13792, max=20576, avg=15737.79, stdev=722.18 00:30:08.373 lat (usec): min=13806, max=20635, avg=15758.71, stdev=722.86 00:30:08.373 clat percentiles (usec): 00:30:08.373 | 1.00th=[14091], 5.00th=[14746], 10.00th=[14877], 20.00th=[15139], 00:30:08.373 | 30.00th=[15401], 40.00th=[15533], 50.00th=[15664], 60.00th=[15795], 00:30:08.373 | 
70.00th=[16057], 80.00th=[16450], 90.00th=[16712], 95.00th=[16909], 00:30:08.373 | 99.00th=[17433], 99.50th=[17433], 99.90th=[20579], 99.95th=[20579], 00:30:08.373 | 99.99th=[20579] 00:30:08.373 bw ( KiB/s): min=23040, max=25394, per=33.37%, avg=24336.11, stdev=730.40, samples=19 00:30:08.373 iops : min= 180, max= 198, avg=190.11, stdev= 5.68, samples=19 00:30:08.373 lat (msec) : 20=99.84%, 50=0.16% 00:30:08.373 cpu : usr=92.42%, sys=6.96%, ctx=20, majf=0, minf=1073 00:30:08.373 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:08.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.373 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.373 issued rwts: total=1902,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:08.373 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:08.373 filename0: (groupid=0, jobs=1): err= 0: pid=89604: Thu Jul 25 09:12:14 2024 00:30:08.373 read: IOPS=189, BW=23.7MiB/s (24.9MB/s)(238MiB/10014msec) 00:30:08.373 slat (nsec): min=6540, max=74972, avg=21176.46, stdev=8477.09 00:30:08.373 clat (usec): min=13793, max=23379, avg=15744.93, stdev=778.32 00:30:08.373 lat (usec): min=13808, max=23419, avg=15766.10, stdev=779.04 00:30:08.373 clat percentiles (usec): 00:30:08.373 | 1.00th=[14091], 5.00th=[14746], 10.00th=[14877], 20.00th=[15139], 00:30:08.373 | 30.00th=[15401], 40.00th=[15533], 50.00th=[15664], 60.00th=[15795], 00:30:08.373 | 70.00th=[16057], 80.00th=[16450], 90.00th=[16712], 95.00th=[16909], 00:30:08.373 | 99.00th=[17433], 99.50th=[17433], 99.90th=[23462], 99.95th=[23462], 00:30:08.373 | 99.99th=[23462] 00:30:08.373 bw ( KiB/s): min=23808, max=25344, per=33.33%, avg=24307.20, stdev=515.19, samples=20 00:30:08.373 iops : min= 186, max= 198, avg=189.90, stdev= 4.02, samples=20 00:30:08.373 lat (msec) : 20=99.68%, 50=0.32% 00:30:08.373 cpu : usr=92.35%, sys=6.98%, ctx=128, majf=0, minf=1075 00:30:08.373 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:08.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.373 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.373 issued rwts: total=1902,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:08.374 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:08.374 00:30:08.374 Run status group 0 (all jobs): 00:30:08.374 READ: bw=71.2MiB/s (74.7MB/s), 23.7MiB/s-23.8MiB/s (24.9MB/s-24.9MB/s), io=713MiB (748MB), run=10009-10014msec 00:30:08.374 ----------------------------------------------------- 00:30:08.374 Suppressions used: 00:30:08.374 count bytes template 00:30:08.374 5 44 /usr/src/fio/parse.c 00:30:08.374 1 8 libtcmalloc_minimal.so 00:30:08.374 1 904 libcrypto.so 00:30:08.374 ----------------------------------------------------- 00:30:08.374 00:30:08.374 09:12:15 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:30:08.374 09:12:15 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:30:08.374 09:12:15 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:30:08.374 09:12:15 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:08.374 09:12:15 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:30:08.374 09:12:15 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:08.374 09:12:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.374 09:12:15 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:08.374 09:12:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.374 09:12:15 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:08.374 09:12:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.374 09:12:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:08.374 09:12:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.374 00:30:08.374 real 0m12.527s 00:30:08.374 user 0m29.812s 00:30:08.374 sys 0m2.497s 00:30:08.374 09:12:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:08.374 ************************************ 00:30:08.374 END TEST fio_dif_digest 00:30:08.374 ************************************ 00:30:08.374 09:12:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:08.374 09:12:15 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:30:08.374 09:12:15 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:30:08.374 09:12:15 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:08.374 09:12:15 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:30:08.635 09:12:15 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:08.635 09:12:15 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:30:08.635 09:12:15 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:08.635 09:12:15 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:08.635 rmmod nvme_tcp 00:30:08.635 rmmod nvme_fabrics 00:30:08.635 rmmod nvme_keyring 00:30:08.635 09:12:15 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:08.635 09:12:15 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:30:08.635 09:12:15 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:30:08.635 09:12:15 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 88837 ']' 00:30:08.635 09:12:15 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 88837 00:30:08.635 09:12:15 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 88837 ']' 00:30:08.635 09:12:15 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 88837 00:30:08.635 09:12:15 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:30:08.635 09:12:15 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:08.635 09:12:15 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88837 00:30:08.635 09:12:15 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:08.635 killing process with pid 88837 00:30:08.635 09:12:15 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:08.635 09:12:15 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88837' 00:30:08.635 09:12:15 nvmf_dif -- common/autotest_common.sh@969 -- # kill 88837 00:30:08.635 09:12:15 nvmf_dif -- common/autotest_common.sh@974 -- # wait 88837 00:30:10.010 09:12:16 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:30:10.010 09:12:16 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:10.010 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:10.268 Waiting for block devices as requested 00:30:10.268 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:10.268 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:10.268 09:12:17 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:10.268 09:12:17 
nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:10.268 09:12:17 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:10.268 09:12:17 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:10.268 09:12:17 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:10.268 09:12:17 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:10.268 09:12:17 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:10.527 09:12:17 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:30:10.527 00:30:10.527 real 1m9.157s 00:30:10.527 user 4m5.253s 00:30:10.527 sys 0m19.750s 00:30:10.527 09:12:17 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:10.527 ************************************ 00:30:10.527 09:12:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:10.527 END TEST nvmf_dif 00:30:10.527 ************************************ 00:30:10.527 09:12:17 -- spdk/autotest.sh@297 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:10.527 09:12:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:10.527 09:12:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:10.527 09:12:17 -- common/autotest_common.sh@10 -- # set +x 00:30:10.527 ************************************ 00:30:10.527 START TEST nvmf_abort_qd_sizes 00:30:10.527 ************************************ 00:30:10.527 09:12:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:10.527 * Looking for test storage... 00:30:10.527 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:10.527 09:12:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:10.527 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:30:10.527 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:10.527 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:10.527 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:10.527 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:10.527 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:10.527 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:10.527 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:10.527 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:10.527 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:10.527 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:10.527 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:30:10.527 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:30:10.527 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:10.527 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:10.527 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:10.527 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:10.527 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:10.527 09:12:17 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:10.527 09:12:17 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:10.527 09:12:17 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:10.527 09:12:17 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.527 09:12:17 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.528 09:12:17 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.528 09:12:17 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:30:10.528 09:12:17 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.528 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:30:10.528 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:10.528 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:10.528 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:10.528 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:10.528 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:10.528 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:10.528 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:10.528 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:10.528 09:12:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:30:10.528 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:10.528 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:10.528 09:12:17 nvmf_abort_qd_sizes 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:30:10.528 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:10.528 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:10.528 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:10.528 09:12:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:10.528 09:12:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:10.528 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:30:10.528 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:30:10.528 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:30:10.528 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:30:10.528 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:30:10.528 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:30:10.528 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:10.528 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:10.528 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:30:10.528 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:30:10.528 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:10.528 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:10.528 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:10.528 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:10.528 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:10.528 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:10.528 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:10.528 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:10.528 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:30:10.528 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:30:10.528 Cannot find device "nvmf_tgt_br" 00:30:10.528 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:30:10.528 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:30:10.528 Cannot find device "nvmf_tgt_br2" 00:30:10.528 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:30:10.528 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:30:10.528 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:30:10.528 Cannot find device "nvmf_tgt_br" 00:30:10.528 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:30:10.528 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:30:10.786 Cannot find device "nvmf_tgt_br2" 00:30:10.786 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:30:10.786 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:30:10.786 09:12:17 nvmf_abort_qd_sizes -- 
nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:30:10.786 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:10.786 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:10.786 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:30:10.786 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:10.786 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:10.786 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:30:10.786 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:30:10.786 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:10.787 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:10.787 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:10.787 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:10.787 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:10.787 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:10.787 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:30:10.787 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:30:10.787 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:30:10.787 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:30:10.787 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:30:10.787 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:30:10.787 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:10.787 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:10.787 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:10.787 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:30:10.787 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:30:10.787 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:30:10.787 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:11.045 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:11.045 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:11.045 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:11.045 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:30:11.045 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:11.045 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:30:11.045 00:30:11.045 --- 10.0.0.2 ping statistics --- 00:30:11.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:11.045 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:30:11.045 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:30:11.045 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:11.045 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:30:11.045 00:30:11.045 --- 10.0.0.3 ping statistics --- 00:30:11.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:11.045 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:30:11.045 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:11.045 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:11.045 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:30:11.045 00:30:11.045 --- 10.0.0.1 ping statistics --- 00:30:11.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:11.045 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:30:11.045 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:11.045 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:30:11.045 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:30:11.045 09:12:17 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:11.612 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:11.612 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:30:11.872 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:30:11.872 09:12:18 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:11.872 09:12:18 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:11.872 09:12:18 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:11.872 09:12:18 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:11.872 09:12:18 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:11.872 09:12:18 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:11.872 09:12:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:30:11.872 09:12:18 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:11.872 09:12:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:11.872 09:12:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:11.872 09:12:18 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=90217 00:30:11.872 09:12:18 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 90217 00:30:11.872 09:12:18 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:30:11.872 09:12:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 90217 ']' 00:30:11.872 09:12:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:11.872 09:12:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:11.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:11.872 09:12:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:11.872 09:12:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:11.872 09:12:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:11.872 [2024-07-25 09:12:18.950869] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:30:11.872 [2024-07-25 09:12:18.951014] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:12.130 [2024-07-25 09:12:19.121956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:12.408 [2024-07-25 09:12:19.421715] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:12.409 [2024-07-25 09:12:19.422236] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:12.409 [2024-07-25 09:12:19.422275] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:12.409 [2024-07-25 09:12:19.422296] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:12.409 [2024-07-25 09:12:19.422314] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:12.409 [2024-07-25 09:12:19.422475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:12.409 [2024-07-25 09:12:19.423110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:12.409 [2024-07-25 09:12:19.423185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:12.409 [2024-07-25 09:12:19.423194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:12.666 [2024-07-25 09:12:19.634708] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 
00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname 
-s 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:12.924 09:12:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:12.924 ************************************ 00:30:12.924 START TEST spdk_target_abort 00:30:12.924 ************************************ 00:30:12.925 09:12:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:30:12.925 09:12:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:30:12.925 09:12:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:30:12.925 09:12:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.925 09:12:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:12.925 spdk_targetn1 00:30:12.925 09:12:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.925 09:12:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:12.925 09:12:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.925 09:12:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:13.206 [2024-07-25 09:12:20.038159] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:13.206 09:12:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.206 09:12:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:30:13.206 09:12:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.206 09:12:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:13.206 09:12:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.206 09:12:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:30:13.206 09:12:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.206 09:12:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:13.206 09:12:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.206 09:12:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:30:13.206 09:12:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.206 09:12:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:13.206 [2024-07-25 09:12:20.083451] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:13.206 09:12:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.206 09:12:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:30:13.206 09:12:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:13.206 09:12:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:13.207 09:12:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:30:13.207 09:12:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:13.207 09:12:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:13.207 09:12:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:13.207 09:12:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:13.207 09:12:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:13.207 09:12:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:13.207 09:12:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:13.207 09:12:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:13.207 09:12:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:13.207 09:12:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:13.207 09:12:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:30:13.207 09:12:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:13.207 09:12:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:13.207 09:12:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:13.207 09:12:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:13.207 09:12:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:13.207 09:12:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:16.531 Initializing NVMe Controllers 00:30:16.531 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:16.531 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:16.531 Initialization complete. Launching workers. 00:30:16.531 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 7879, failed: 0 00:30:16.531 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1027, failed to submit 6852 00:30:16.531 success 827, unsuccess 200, failed 0 00:30:16.531 09:12:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:16.531 09:12:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:19.816 Initializing NVMe Controllers 00:30:19.816 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:19.816 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:19.816 Initialization complete. Launching workers. 00:30:19.816 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8928, failed: 0 00:30:19.816 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1200, failed to submit 7728 00:30:19.816 success 364, unsuccess 836, failed 0 00:30:19.816 09:12:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:19.816 09:12:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:24.004 Initializing NVMe Controllers 00:30:24.004 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:24.004 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:24.004 Initialization complete. Launching workers. 
00:30:24.004 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 25283, failed: 0 00:30:24.004 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2249, failed to submit 23034 00:30:24.004 success 235, unsuccess 2014, failed 0 00:30:24.004 09:12:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:30:24.004 09:12:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:24.004 09:12:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:24.004 09:12:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:24.004 09:12:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:30:24.004 09:12:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:24.004 09:12:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:24.004 09:12:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:24.004 09:12:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 90217 00:30:24.004 09:12:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 90217 ']' 00:30:24.004 09:12:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 90217 00:30:24.004 09:12:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:30:24.004 09:12:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:24.004 09:12:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90217 00:30:24.004 killing process with pid 90217 00:30:24.004 09:12:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:24.004 09:12:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:24.004 09:12:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90217' 00:30:24.004 09:12:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 90217 00:30:24.004 09:12:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 90217 00:30:24.589 ************************************ 00:30:24.589 END TEST spdk_target_abort 00:30:24.589 ************************************ 00:30:24.589 00:30:24.589 real 0m11.535s 00:30:24.589 user 0m44.773s 00:30:24.589 sys 0m2.345s 00:30:24.589 09:12:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:24.589 09:12:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:24.589 09:12:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:30:24.589 09:12:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:24.589 09:12:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:24.590 09:12:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:24.590 ************************************ 00:30:24.590 START TEST kernel_target_abort 00:30:24.590 
************************************ 00:30:24.590 09:12:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:30:24.590 09:12:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:30:24.590 09:12:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:30:24.590 09:12:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:24.590 09:12:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:24.590 09:12:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:24.590 09:12:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:24.590 09:12:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:24.590 09:12:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:24.590 09:12:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:24.590 09:12:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:24.590 09:12:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:24.590 09:12:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:30:24.590 09:12:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:30:24.590 09:12:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:30:24.590 09:12:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:24.590 09:12:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:24.590 09:12:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:24.590 09:12:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:30:24.590 09:12:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:30:24.590 09:12:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:30:24.590 09:12:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:24.590 09:12:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:24.849 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:24.849 Waiting for block devices as requested 00:30:25.108 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:25.108 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:25.674 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:30:25.674 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:25.674 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:30:25.674 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:30:25.674 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:25.674 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:30:25.674 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:30:25.675 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:30:25.675 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:30:25.675 No valid GPT data, bailing 00:30:25.675 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:25.675 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:30:25.675 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:30:25.675 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:30:25.675 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:30:25.675 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:30:25.675 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:30:25.675 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:30:25.675 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:30:25.675 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:30:25.675 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:30:25.675 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:30:25.675 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:30:25.934 No valid GPT data, bailing 00:30:25.934 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:30:25.934 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:30:25.934 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:30:25.934 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:30:25.934 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:30:25.934 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:30:25.934 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:30:25.934 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:30:25.934 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:30:25.934 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:30:25.934 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:30:25.934 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:30:25.934 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:30:25.934 No valid GPT data, bailing 00:30:25.934 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:30:25.934 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:30:25.934 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:30:25.934 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:30:25.934 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:30:25.934 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:30:25.934 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:30:25.934 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:30:25.934 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:30:25.934 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:30:25.934 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:30:25.934 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:30:25.934 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:30:25.934 No valid GPT data, bailing 00:30:25.934 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:30:25.934 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:30:25.934 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:30:25.934 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:30:25.934 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:30:25.934 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:25.934 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:25.934 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:25.934 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:30:25.934 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:30:25.934 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:30:25.934 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:30:25.934 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:30:25.934 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:30:25.934 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:30:25.934 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:30:25.934 09:12:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:25.934 09:12:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 --hostid=a4705431-95c9-4bc1-9185-4a8233d2d7f5 -a 10.0.0.1 -t tcp -s 4420 00:30:25.934 00:30:25.934 Discovery Log Number of Records 2, Generation counter 2 00:30:25.934 =====Discovery Log Entry 0====== 00:30:25.934 trtype: tcp 00:30:25.934 adrfam: ipv4 00:30:25.934 subtype: current discovery subsystem 00:30:25.934 treq: not specified, sq flow control disable supported 00:30:25.934 portid: 1 00:30:25.934 trsvcid: 4420 00:30:25.934 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:30:25.934 traddr: 10.0.0.1 00:30:25.934 eflags: none 00:30:25.934 sectype: none 00:30:25.934 =====Discovery Log Entry 1====== 00:30:25.934 trtype: tcp 00:30:25.934 adrfam: ipv4 00:30:25.934 subtype: nvme subsystem 00:30:25.934 treq: not specified, sq flow control disable supported 00:30:25.934 portid: 1 00:30:25.934 trsvcid: 4420 00:30:25.934 subnqn: nqn.2016-06.io.spdk:testnqn 00:30:25.934 traddr: 10.0.0.1 00:30:25.934 eflags: none 00:30:25.934 sectype: none 00:30:25.934 09:12:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:30:25.934 09:12:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:25.934 09:12:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:25.934 09:12:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:30:25.934 09:12:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:25.934 09:12:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:25.934 09:12:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:25.934 09:12:33 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:25.934 09:12:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:25.934 09:12:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:25.934 09:12:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:25.934 09:12:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:25.934 09:12:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:25.934 09:12:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:25.934 09:12:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:30:25.934 09:12:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:25.934 09:12:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:30:25.934 09:12:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:25.934 09:12:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:26.193 09:12:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:26.193 09:12:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:29.479 Initializing NVMe Controllers 00:30:29.479 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:29.479 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:29.479 Initialization complete. Launching workers. 00:30:29.479 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 23988, failed: 0 00:30:29.479 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 23988, failed to submit 0 00:30:29.479 success 0, unsuccess 23988, failed 0 00:30:29.479 09:12:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:29.479 09:12:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:32.768 Initializing NVMe Controllers 00:30:32.768 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:32.768 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:32.768 Initialization complete. Launching workers. 
00:30:32.768 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 57240, failed: 0 00:30:32.768 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24656, failed to submit 32584 00:30:32.768 success 0, unsuccess 24656, failed 0 00:30:32.768 09:12:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:32.768 09:12:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:36.104 Initializing NVMe Controllers 00:30:36.104 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:36.104 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:36.104 Initialization complete. Launching workers. 00:30:36.104 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 64806, failed: 0 00:30:36.104 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16220, failed to submit 48586 00:30:36.104 success 0, unsuccess 16220, failed 0 00:30:36.104 09:12:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:30:36.104 09:12:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:30:36.104 09:12:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:30:36.104 09:12:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:36.104 09:12:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:36.104 09:12:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:36.104 09:12:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:36.104 09:12:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:30:36.104 09:12:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:30:36.104 09:12:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:36.670 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:37.236 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:30:37.236 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:30:37.494 ************************************ 00:30:37.494 END TEST kernel_target_abort 00:30:37.494 ************************************ 00:30:37.494 00:30:37.494 real 0m12.845s 00:30:37.495 user 0m6.686s 00:30:37.495 sys 0m3.859s 00:30:37.495 09:12:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:37.495 09:12:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:37.495 09:12:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:37.495 09:12:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:30:37.495 
09:12:44 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:37.495 09:12:44 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:30:37.495 09:12:44 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:37.495 09:12:44 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:30:37.495 09:12:44 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:37.495 09:12:44 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:37.495 rmmod nvme_tcp 00:30:37.495 rmmod nvme_fabrics 00:30:37.495 rmmod nvme_keyring 00:30:37.495 09:12:44 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:37.495 09:12:44 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:30:37.495 09:12:44 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:30:37.495 09:12:44 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 90217 ']' 00:30:37.495 09:12:44 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 90217 00:30:37.495 09:12:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 90217 ']' 00:30:37.495 09:12:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 90217 00:30:37.495 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (90217) - No such process 00:30:37.495 Process with pid 90217 is not found 00:30:37.495 09:12:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 90217 is not found' 00:30:37.495 09:12:44 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:30:37.495 09:12:44 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:37.752 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:37.752 Waiting for block devices as requested 00:30:38.011 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:38.011 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:38.011 09:12:45 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:38.011 09:12:45 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:38.011 09:12:45 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:38.011 09:12:45 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:38.011 09:12:45 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:38.011 09:12:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:38.011 09:12:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.011 09:12:45 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:30:38.011 ************************************ 00:30:38.011 END TEST nvmf_abort_qd_sizes 00:30:38.011 ************************************ 00:30:38.011 00:30:38.011 real 0m27.629s 00:30:38.011 user 0m52.600s 00:30:38.011 sys 0m7.567s 00:30:38.011 09:12:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:38.011 09:12:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:38.011 09:12:45 -- spdk/autotest.sh@299 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:30:38.011 09:12:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:38.011 09:12:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:38.011 09:12:45 -- common/autotest_common.sh@10 -- # set +x 
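For reference, the kernel_target_abort pass above drives the in-kernel NVMe/TCP target purely through nvmet configfs (configure_kernel_target and clean_kernel_target in nvmf/common.sh). A minimal standalone sketch of that sequence follows; the NQN, backing device and the 10.0.0.1:4420 listener are taken from this run, while the configfs attribute names are the standard nvmet ones, which the xtrace above echoes values into but never names, so treat them as assumed rather than quoted from the log.

    # export one unused namespace over NVMe/TCP from the kernel target (root required)
    modprobe nvmet
    modprobe nvmet_tcp
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$subsys" "$subsys/namespaces/1" "$port"
    echo 1            > "$subsys/attr_allow_any_host"
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"     # target goes live; 'nvme discover -t tcp -a 10.0.0.1 -s 4420' now lists it
    # teardown, matching clean_kernel_target above
    echo 0 > "$subsys/namespaces/1/enable"
    rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
    rmdir "$subsys/namespaces/1" "$port" "$subsys"
    modprobe -r nvmet_tcp nvmet

The abort example is then pointed at this listener with -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn', which is exactly what the three queue-depth passes (4, 24, 64) above do.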
00:30:38.270 ************************************ 00:30:38.270 START TEST keyring_file 00:30:38.270 ************************************ 00:30:38.270 09:12:45 keyring_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:30:38.270 * Looking for test storage... 00:30:38.270 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:30:38.270 09:12:45 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:30:38.270 09:12:45 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:38.270 09:12:45 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:30:38.270 09:12:45 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:38.270 09:12:45 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:38.270 09:12:45 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:38.270 09:12:45 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:38.270 09:12:45 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:38.270 09:12:45 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:38.270 09:12:45 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:38.270 09:12:45 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:38.270 09:12:45 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:38.270 09:12:45 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:38.270 09:12:45 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:30:38.270 09:12:45 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:30:38.270 09:12:45 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:38.270 09:12:45 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:38.270 09:12:45 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:38.270 09:12:45 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:38.270 09:12:45 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:38.270 09:12:45 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:38.270 09:12:45 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:38.270 09:12:45 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:38.270 09:12:45 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.270 09:12:45 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.270 09:12:45 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.270 09:12:45 keyring_file -- paths/export.sh@5 -- # export PATH 00:30:38.270 09:12:45 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.270 09:12:45 keyring_file -- nvmf/common.sh@47 -- # : 0 00:30:38.270 09:12:45 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:38.270 09:12:45 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:38.270 09:12:45 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:38.270 09:12:45 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:38.270 09:12:45 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:38.270 09:12:45 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:38.270 09:12:45 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:38.270 09:12:45 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:38.270 09:12:45 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:30:38.270 09:12:45 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:30:38.270 09:12:45 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:30:38.270 09:12:45 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:30:38.270 09:12:45 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:30:38.270 09:12:45 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:30:38.270 09:12:45 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:38.270 09:12:45 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:38.270 09:12:45 keyring_file -- keyring/common.sh@17 -- # name=key0 00:30:38.270 09:12:45 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:38.270 09:12:45 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:38.270 09:12:45 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:38.270 09:12:45 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.AMA2cgMauO 00:30:38.270 09:12:45 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:38.270 09:12:45 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:38.270 09:12:45 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:38.270 09:12:45 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:38.270 09:12:45 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:38.270 09:12:45 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:38.271 09:12:45 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:38.271 09:12:45 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.AMA2cgMauO 00:30:38.271 09:12:45 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.AMA2cgMauO 00:30:38.271 09:12:45 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.AMA2cgMauO 00:30:38.271 09:12:45 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:30:38.271 09:12:45 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:38.271 09:12:45 keyring_file -- keyring/common.sh@17 -- # name=key1 00:30:38.271 09:12:45 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:30:38.271 09:12:45 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:38.271 09:12:45 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:38.271 09:12:45 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.7xTZq3OsMt 00:30:38.271 09:12:45 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:30:38.271 09:12:45 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:30:38.271 09:12:45 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:38.271 09:12:45 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:38.271 09:12:45 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:30:38.271 09:12:45 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:38.271 09:12:45 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:38.271 09:12:45 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.7xTZq3OsMt 00:30:38.271 09:12:45 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.7xTZq3OsMt 00:30:38.271 09:12:45 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.7xTZq3OsMt 00:30:38.271 09:12:45 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:38.271 09:12:45 keyring_file -- keyring/file.sh@30 -- # tgtpid=91295 00:30:38.271 09:12:45 keyring_file -- keyring/file.sh@32 -- # waitforlisten 91295 00:30:38.271 09:12:45 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 91295 ']' 00:30:38.271 09:12:45 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:38.271 09:12:45 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:38.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:38.271 09:12:45 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:38.271 09:12:45 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:38.271 09:12:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:38.529 [2024-07-25 09:12:45.470540] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:30:38.529 [2024-07-25 09:12:45.470740] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91295 ] 00:30:38.830 [2024-07-25 09:12:45.648191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:38.830 [2024-07-25 09:12:45.933119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:39.088 [2024-07-25 09:12:46.137199] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:30:39.654 09:12:46 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:39.654 09:12:46 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:30:39.654 09:12:46 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:30:39.654 09:12:46 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.654 09:12:46 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:39.654 [2024-07-25 09:12:46.719546] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:39.654 null0 00:30:39.654 [2024-07-25 09:12:46.751547] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:39.654 [2024-07-25 09:12:46.751951] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:39.654 [2024-07-25 09:12:46.759529] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:30:39.654 09:12:46 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.654 09:12:46 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:39.654 09:12:46 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:30:39.654 09:12:46 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:39.654 09:12:46 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:39.654 09:12:46 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:39.654 09:12:46 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:39.911 09:12:46 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:39.911 09:12:46 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:39.911 09:12:46 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.911 09:12:46 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:39.911 [2024-07-25 09:12:46.771577] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:30:39.911 request: 00:30:39.911 { 00:30:39.911 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:30:39.911 "secure_channel": false, 00:30:39.911 "listen_address": { 00:30:39.911 "trtype": "tcp", 00:30:39.911 "traddr": "127.0.0.1", 00:30:39.911 "trsvcid": "4420" 00:30:39.911 }, 00:30:39.911 "method": "nvmf_subsystem_add_listener", 00:30:39.911 "req_id": 1 00:30:39.911 } 00:30:39.911 Got JSON-RPC error response 00:30:39.911 response: 00:30:39.911 { 00:30:39.911 "code": -32602, 00:30:39.911 "message": "Invalid parameters" 00:30:39.911 } 00:30:39.911 09:12:46 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
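The keyring_file suite above registers file-backed TLS PSKs with the bdevperf instance (socket /var/tmp/bperf.sock) and then attaches controllers through the loopback listener that spdk_tgt just opened on 127.0.0.1:4420. Condensed into one place, and using only commands that appear in this run, the happy-path flow looks like the sketch below; the key file name is this run's mktemp output, and the interchange-format PSK written into it (generated by the format_interchange_psk helper in nvmf/common.sh) is elided.

    # register a PSK file under the name key0 and attach a TLS-protected NVMe/TCP controller with it
    key_file=/tmp/tmp.AMA2cgMauO   # holds the NVMeTLSkey-1 interchange string for 00112233445566778899aabbccddeeff
    chmod 0600 "$key_file"         # a 0660 copy is rejected later in the run with 'Invalid permissions for key file'
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key_file"
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys | jq '.[] | select(.name == "key0")'
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

The negative cases that follow (attaching with the wrong key, loosening the file mode to 0660, removing the file before attach) are what produce the "Input/output error", "Operation not permitted" and "No such device" JSON-RPC responses recorded further down.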
00:30:39.911 09:12:46 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:30:39.911 09:12:46 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:39.911 09:12:46 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:39.911 09:12:46 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:39.911 09:12:46 keyring_file -- keyring/file.sh@46 -- # bperfpid=91312 00:30:39.911 09:12:46 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:30:39.911 09:12:46 keyring_file -- keyring/file.sh@48 -- # waitforlisten 91312 /var/tmp/bperf.sock 00:30:39.911 09:12:46 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 91312 ']' 00:30:39.911 09:12:46 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:39.911 09:12:46 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:39.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:39.911 09:12:46 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:39.911 09:12:46 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:39.911 09:12:46 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:39.911 [2024-07-25 09:12:46.903301] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:30:39.912 [2024-07-25 09:12:46.903571] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91312 ] 00:30:40.169 [2024-07-25 09:12:47.084676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:40.428 [2024-07-25 09:12:47.306375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:40.428 [2024-07-25 09:12:47.509211] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:30:40.685 09:12:47 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:40.685 09:12:47 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:30:40.685 09:12:47 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AMA2cgMauO 00:30:40.685 09:12:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AMA2cgMauO 00:30:41.248 09:12:48 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.7xTZq3OsMt 00:30:41.248 09:12:48 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.7xTZq3OsMt 00:30:41.248 09:12:48 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:30:41.248 09:12:48 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:30:41.248 09:12:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:41.248 09:12:48 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:41.248 09:12:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:41.505 09:12:48 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.AMA2cgMauO == 
\/\t\m\p\/\t\m\p\.\A\M\A\2\c\g\M\a\u\O ]] 00:30:41.505 09:12:48 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:30:41.505 09:12:48 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:30:41.505 09:12:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:41.505 09:12:48 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:41.505 09:12:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:42.069 09:12:48 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.7xTZq3OsMt == \/\t\m\p\/\t\m\p\.\7\x\T\Z\q\3\O\s\M\t ]] 00:30:42.070 09:12:48 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:30:42.070 09:12:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:42.070 09:12:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:42.070 09:12:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:42.070 09:12:48 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:42.070 09:12:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:42.070 09:12:49 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:30:42.070 09:12:49 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:30:42.070 09:12:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:42.070 09:12:49 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:42.070 09:12:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:42.070 09:12:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:42.070 09:12:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:42.326 09:12:49 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:30:42.326 09:12:49 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:42.326 09:12:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:42.583 [2024-07-25 09:12:49.611874] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:42.842 nvme0n1 00:30:42.842 09:12:49 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:30:42.842 09:12:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:42.842 09:12:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:42.842 09:12:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:42.842 09:12:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:42.842 09:12:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:43.099 09:12:49 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:30:43.099 09:12:49 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:30:43.099 09:12:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:43.099 09:12:49 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:43.099 09:12:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:30:43.099 09:12:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:43.099 09:12:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:43.357 09:12:50 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:30:43.357 09:12:50 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:43.357 Running I/O for 1 seconds... 00:30:44.288 00:30:44.288 Latency(us) 00:30:44.288 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:44.288 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:30:44.288 nvme0n1 : 1.01 8135.53 31.78 0.00 0.00 15654.50 8936.73 30742.34 00:30:44.288 =================================================================================================================== 00:30:44.288 Total : 8135.53 31.78 0.00 0.00 15654.50 8936.73 30742.34 00:30:44.288 0 00:30:44.545 09:12:51 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:44.545 09:12:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:44.545 09:12:51 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:30:44.545 09:12:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:44.545 09:12:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:44.545 09:12:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:44.545 09:12:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:44.545 09:12:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:45.111 09:12:51 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:30:45.111 09:12:51 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:30:45.111 09:12:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:45.111 09:12:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:45.111 09:12:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:45.111 09:12:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:45.111 09:12:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:45.111 09:12:52 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:30:45.111 09:12:52 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:45.111 09:12:52 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:30:45.111 09:12:52 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:45.111 09:12:52 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:30:45.111 09:12:52 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:45.111 09:12:52 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:30:45.111 09:12:52 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:30:45.111 09:12:52 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:45.111 09:12:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:45.369 [2024-07-25 09:12:52.384465] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spd[2024-07-25 09:12:52.384465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030000 (107): Transport endpoint is not connected 00:30:45.369 k_sock_recv() failed, errno 107: Transport endpoint is not connected 00:30:45.369 [2024-07-25 09:12:52.385437] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030000 (9): Bad file descriptor 00:30:45.369 [2024-07-25 09:12:52.386431] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:45.369 [2024-07-25 09:12:52.386485] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:30:45.369 [2024-07-25 09:12:52.386519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:45.369 request: 00:30:45.369 { 00:30:45.369 "name": "nvme0", 00:30:45.369 "trtype": "tcp", 00:30:45.369 "traddr": "127.0.0.1", 00:30:45.369 "adrfam": "ipv4", 00:30:45.369 "trsvcid": "4420", 00:30:45.369 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:45.369 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:45.369 "prchk_reftag": false, 00:30:45.369 "prchk_guard": false, 00:30:45.369 "hdgst": false, 00:30:45.369 "ddgst": false, 00:30:45.369 "psk": "key1", 00:30:45.369 "method": "bdev_nvme_attach_controller", 00:30:45.369 "req_id": 1 00:30:45.369 } 00:30:45.369 Got JSON-RPC error response 00:30:45.369 response: 00:30:45.369 { 00:30:45.369 "code": -5, 00:30:45.369 "message": "Input/output error" 00:30:45.369 } 00:30:45.369 09:12:52 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:30:45.369 09:12:52 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:45.369 09:12:52 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:45.369 09:12:52 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:45.369 09:12:52 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:30:45.369 09:12:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:45.369 09:12:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:45.369 09:12:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:45.369 09:12:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:45.369 09:12:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:45.627 09:12:52 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:30:45.627 09:12:52 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:30:45.627 09:12:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:45.627 09:12:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:45.627 09:12:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:45.627 09:12:52 keyring_file -- keyring/common.sh@10 -- # jq 
'.[] | select(.name == "key1")' 00:30:45.627 09:12:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:45.886 09:12:52 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:30:45.886 09:12:52 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:30:45.886 09:12:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:46.145 09:12:53 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:30:46.145 09:12:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:30:46.405 09:12:53 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:30:46.405 09:12:53 keyring_file -- keyring/file.sh@77 -- # jq length 00:30:46.405 09:12:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:46.667 09:12:53 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:30:46.667 09:12:53 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.AMA2cgMauO 00:30:46.667 09:12:53 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.AMA2cgMauO 00:30:46.667 09:12:53 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:30:46.667 09:12:53 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.AMA2cgMauO 00:30:46.667 09:12:53 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:30:46.667 09:12:53 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:46.667 09:12:53 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:30:46.667 09:12:53 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:46.667 09:12:53 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AMA2cgMauO 00:30:46.667 09:12:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AMA2cgMauO 00:30:46.925 [2024-07-25 09:12:53.892475] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.AMA2cgMauO': 0100660 00:30:46.925 [2024-07-25 09:12:53.892551] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:30:46.925 request: 00:30:46.925 { 00:30:46.925 "name": "key0", 00:30:46.925 "path": "/tmp/tmp.AMA2cgMauO", 00:30:46.925 "method": "keyring_file_add_key", 00:30:46.925 "req_id": 1 00:30:46.925 } 00:30:46.925 Got JSON-RPC error response 00:30:46.925 response: 00:30:46.925 { 00:30:46.925 "code": -1, 00:30:46.925 "message": "Operation not permitted" 00:30:46.925 } 00:30:46.925 09:12:53 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:30:46.925 09:12:53 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:46.925 09:12:53 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:46.925 09:12:53 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:46.925 09:12:53 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.AMA2cgMauO 00:30:46.926 09:12:53 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AMA2cgMauO 00:30:46.926 09:12:53 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AMA2cgMauO 00:30:47.183 09:12:54 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.AMA2cgMauO 00:30:47.183 09:12:54 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:30:47.183 09:12:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:47.183 09:12:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:47.183 09:12:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:47.183 09:12:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:47.183 09:12:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:47.442 09:12:54 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:30:47.442 09:12:54 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:47.442 09:12:54 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:30:47.442 09:12:54 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:47.442 09:12:54 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:30:47.442 09:12:54 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:47.442 09:12:54 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:30:47.442 09:12:54 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:47.442 09:12:54 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:47.442 09:12:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:47.720 [2024-07-25 09:12:54.652731] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.AMA2cgMauO': No such file or directory 00:30:47.720 [2024-07-25 09:12:54.652811] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:30:47.720 [2024-07-25 09:12:54.652889] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:30:47.720 [2024-07-25 09:12:54.652904] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:47.720 [2024-07-25 09:12:54.652920] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:30:47.720 request: 00:30:47.720 { 00:30:47.720 "name": "nvme0", 00:30:47.720 "trtype": "tcp", 00:30:47.720 "traddr": "127.0.0.1", 00:30:47.720 "adrfam": "ipv4", 00:30:47.720 "trsvcid": "4420", 00:30:47.721 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:47.721 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:47.721 "prchk_reftag": false, 00:30:47.721 "prchk_guard": false, 00:30:47.721 "hdgst": false, 00:30:47.721 "ddgst": false, 00:30:47.721 "psk": "key0", 00:30:47.721 "method": "bdev_nvme_attach_controller", 00:30:47.721 "req_id": 1 00:30:47.721 } 00:30:47.721 
Got JSON-RPC error response 00:30:47.721 response: 00:30:47.721 { 00:30:47.721 "code": -19, 00:30:47.721 "message": "No such device" 00:30:47.721 } 00:30:47.721 09:12:54 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:30:47.721 09:12:54 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:47.721 09:12:54 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:47.721 09:12:54 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:47.721 09:12:54 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:30:47.721 09:12:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:48.009 09:12:54 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:48.009 09:12:54 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:48.009 09:12:54 keyring_file -- keyring/common.sh@17 -- # name=key0 00:30:48.009 09:12:54 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:48.009 09:12:54 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:48.009 09:12:54 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:48.009 09:12:54 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Jv90Rp4smi 00:30:48.009 09:12:54 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:48.009 09:12:54 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:48.009 09:12:54 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:48.009 09:12:54 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:48.009 09:12:54 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:48.009 09:12:54 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:48.009 09:12:54 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:48.009 09:12:54 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Jv90Rp4smi 00:30:48.009 09:12:54 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Jv90Rp4smi 00:30:48.009 09:12:54 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.Jv90Rp4smi 00:30:48.009 09:12:54 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Jv90Rp4smi 00:30:48.009 09:12:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Jv90Rp4smi 00:30:48.268 09:12:55 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:48.268 09:12:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:48.527 nvme0n1 00:30:48.527 09:12:55 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:30:48.527 09:12:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:48.527 09:12:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:48.527 09:12:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:48.527 09:12:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:30:48.527 09:12:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:48.785 09:12:55 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:30:48.785 09:12:55 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:30:48.785 09:12:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:49.043 09:12:56 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:30:49.043 09:12:56 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:30:49.043 09:12:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:49.043 09:12:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:49.043 09:12:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:49.302 09:12:56 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:30:49.302 09:12:56 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:30:49.302 09:12:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:49.302 09:12:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:49.302 09:12:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:49.302 09:12:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:49.302 09:12:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:49.560 09:12:56 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:30:49.560 09:12:56 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:49.560 09:12:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:49.818 09:12:56 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:30:49.818 09:12:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:49.818 09:12:56 keyring_file -- keyring/file.sh@104 -- # jq length 00:30:50.077 09:12:57 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:30:50.077 09:12:57 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Jv90Rp4smi 00:30:50.077 09:12:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Jv90Rp4smi 00:30:50.335 09:12:57 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.7xTZq3OsMt 00:30:50.335 09:12:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.7xTZq3OsMt 00:30:50.593 09:12:57 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:50.594 09:12:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:50.853 nvme0n1 00:30:50.853 09:12:57 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:30:50.853 09:12:57 keyring_file 
-- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:30:51.111 09:12:58 keyring_file -- keyring/file.sh@112 -- # config='{ 00:30:51.111 "subsystems": [ 00:30:51.111 { 00:30:51.111 "subsystem": "keyring", 00:30:51.111 "config": [ 00:30:51.111 { 00:30:51.111 "method": "keyring_file_add_key", 00:30:51.111 "params": { 00:30:51.111 "name": "key0", 00:30:51.111 "path": "/tmp/tmp.Jv90Rp4smi" 00:30:51.111 } 00:30:51.111 }, 00:30:51.111 { 00:30:51.111 "method": "keyring_file_add_key", 00:30:51.111 "params": { 00:30:51.111 "name": "key1", 00:30:51.111 "path": "/tmp/tmp.7xTZq3OsMt" 00:30:51.111 } 00:30:51.112 } 00:30:51.112 ] 00:30:51.112 }, 00:30:51.112 { 00:30:51.112 "subsystem": "iobuf", 00:30:51.112 "config": [ 00:30:51.112 { 00:30:51.112 "method": "iobuf_set_options", 00:30:51.112 "params": { 00:30:51.112 "small_pool_count": 8192, 00:30:51.112 "large_pool_count": 1024, 00:30:51.112 "small_bufsize": 8192, 00:30:51.112 "large_bufsize": 135168 00:30:51.112 } 00:30:51.112 } 00:30:51.112 ] 00:30:51.112 }, 00:30:51.112 { 00:30:51.112 "subsystem": "sock", 00:30:51.112 "config": [ 00:30:51.112 { 00:30:51.112 "method": "sock_set_default_impl", 00:30:51.112 "params": { 00:30:51.112 "impl_name": "uring" 00:30:51.112 } 00:30:51.112 }, 00:30:51.112 { 00:30:51.112 "method": "sock_impl_set_options", 00:30:51.112 "params": { 00:30:51.112 "impl_name": "ssl", 00:30:51.112 "recv_buf_size": 4096, 00:30:51.112 "send_buf_size": 4096, 00:30:51.112 "enable_recv_pipe": true, 00:30:51.112 "enable_quickack": false, 00:30:51.112 "enable_placement_id": 0, 00:30:51.112 "enable_zerocopy_send_server": true, 00:30:51.112 "enable_zerocopy_send_client": false, 00:30:51.112 "zerocopy_threshold": 0, 00:30:51.112 "tls_version": 0, 00:30:51.112 "enable_ktls": false 00:30:51.112 } 00:30:51.112 }, 00:30:51.112 { 00:30:51.112 "method": "sock_impl_set_options", 00:30:51.112 "params": { 00:30:51.112 "impl_name": "posix", 00:30:51.112 "recv_buf_size": 2097152, 00:30:51.112 "send_buf_size": 2097152, 00:30:51.112 "enable_recv_pipe": true, 00:30:51.112 "enable_quickack": false, 00:30:51.112 "enable_placement_id": 0, 00:30:51.112 "enable_zerocopy_send_server": true, 00:30:51.112 "enable_zerocopy_send_client": false, 00:30:51.112 "zerocopy_threshold": 0, 00:30:51.112 "tls_version": 0, 00:30:51.112 "enable_ktls": false 00:30:51.112 } 00:30:51.112 }, 00:30:51.112 { 00:30:51.112 "method": "sock_impl_set_options", 00:30:51.112 "params": { 00:30:51.112 "impl_name": "uring", 00:30:51.112 "recv_buf_size": 2097152, 00:30:51.112 "send_buf_size": 2097152, 00:30:51.112 "enable_recv_pipe": true, 00:30:51.112 "enable_quickack": false, 00:30:51.112 "enable_placement_id": 0, 00:30:51.112 "enable_zerocopy_send_server": false, 00:30:51.112 "enable_zerocopy_send_client": false, 00:30:51.112 "zerocopy_threshold": 0, 00:30:51.112 "tls_version": 0, 00:30:51.112 "enable_ktls": false 00:30:51.112 } 00:30:51.112 } 00:30:51.112 ] 00:30:51.112 }, 00:30:51.112 { 00:30:51.112 "subsystem": "vmd", 00:30:51.112 "config": [] 00:30:51.112 }, 00:30:51.112 { 00:30:51.112 "subsystem": "accel", 00:30:51.112 "config": [ 00:30:51.112 { 00:30:51.112 "method": "accel_set_options", 00:30:51.112 "params": { 00:30:51.112 "small_cache_size": 128, 00:30:51.112 "large_cache_size": 16, 00:30:51.112 "task_count": 2048, 00:30:51.112 "sequence_count": 2048, 00:30:51.112 "buf_count": 2048 00:30:51.112 } 00:30:51.112 } 00:30:51.112 ] 00:30:51.112 }, 00:30:51.112 { 00:30:51.112 "subsystem": "bdev", 00:30:51.112 "config": [ 00:30:51.112 { 
00:30:51.112 "method": "bdev_set_options", 00:30:51.112 "params": { 00:30:51.112 "bdev_io_pool_size": 65535, 00:30:51.112 "bdev_io_cache_size": 256, 00:30:51.112 "bdev_auto_examine": true, 00:30:51.112 "iobuf_small_cache_size": 128, 00:30:51.112 "iobuf_large_cache_size": 16 00:30:51.112 } 00:30:51.112 }, 00:30:51.112 { 00:30:51.112 "method": "bdev_raid_set_options", 00:30:51.112 "params": { 00:30:51.112 "process_window_size_kb": 1024, 00:30:51.112 "process_max_bandwidth_mb_sec": 0 00:30:51.112 } 00:30:51.112 }, 00:30:51.112 { 00:30:51.112 "method": "bdev_iscsi_set_options", 00:30:51.112 "params": { 00:30:51.112 "timeout_sec": 30 00:30:51.112 } 00:30:51.112 }, 00:30:51.112 { 00:30:51.112 "method": "bdev_nvme_set_options", 00:30:51.112 "params": { 00:30:51.112 "action_on_timeout": "none", 00:30:51.112 "timeout_us": 0, 00:30:51.112 "timeout_admin_us": 0, 00:30:51.112 "keep_alive_timeout_ms": 10000, 00:30:51.112 "arbitration_burst": 0, 00:30:51.112 "low_priority_weight": 0, 00:30:51.112 "medium_priority_weight": 0, 00:30:51.112 "high_priority_weight": 0, 00:30:51.112 "nvme_adminq_poll_period_us": 10000, 00:30:51.112 "nvme_ioq_poll_period_us": 0, 00:30:51.112 "io_queue_requests": 512, 00:30:51.112 "delay_cmd_submit": true, 00:30:51.112 "transport_retry_count": 4, 00:30:51.112 "bdev_retry_count": 3, 00:30:51.112 "transport_ack_timeout": 0, 00:30:51.112 "ctrlr_loss_timeout_sec": 0, 00:30:51.112 "reconnect_delay_sec": 0, 00:30:51.112 "fast_io_fail_timeout_sec": 0, 00:30:51.112 "disable_auto_failback": false, 00:30:51.112 "generate_uuids": false, 00:30:51.112 "transport_tos": 0, 00:30:51.112 "nvme_error_stat": false, 00:30:51.112 "rdma_srq_size": 0, 00:30:51.112 "io_path_stat": false, 00:30:51.112 "allow_accel_sequence": false, 00:30:51.112 "rdma_max_cq_size": 0, 00:30:51.112 "rdma_cm_event_timeout_ms": 0, 00:30:51.112 "dhchap_digests": [ 00:30:51.112 "sha256", 00:30:51.112 "sha384", 00:30:51.112 "sha512" 00:30:51.112 ], 00:30:51.112 "dhchap_dhgroups": [ 00:30:51.112 "null", 00:30:51.112 "ffdhe2048", 00:30:51.112 "ffdhe3072", 00:30:51.112 "ffdhe4096", 00:30:51.112 "ffdhe6144", 00:30:51.112 "ffdhe8192" 00:30:51.112 ] 00:30:51.112 } 00:30:51.112 }, 00:30:51.112 { 00:30:51.112 "method": "bdev_nvme_attach_controller", 00:30:51.112 "params": { 00:30:51.112 "name": "nvme0", 00:30:51.112 "trtype": "TCP", 00:30:51.112 "adrfam": "IPv4", 00:30:51.112 "traddr": "127.0.0.1", 00:30:51.112 "trsvcid": "4420", 00:30:51.112 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:51.112 "prchk_reftag": false, 00:30:51.112 "prchk_guard": false, 00:30:51.112 "ctrlr_loss_timeout_sec": 0, 00:30:51.112 "reconnect_delay_sec": 0, 00:30:51.112 "fast_io_fail_timeout_sec": 0, 00:30:51.112 "psk": "key0", 00:30:51.112 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:51.112 "hdgst": false, 00:30:51.112 "ddgst": false 00:30:51.112 } 00:30:51.112 }, 00:30:51.112 { 00:30:51.112 "method": "bdev_nvme_set_hotplug", 00:30:51.112 "params": { 00:30:51.112 "period_us": 100000, 00:30:51.112 "enable": false 00:30:51.112 } 00:30:51.112 }, 00:30:51.112 { 00:30:51.112 "method": "bdev_wait_for_examine" 00:30:51.112 } 00:30:51.112 ] 00:30:51.112 }, 00:30:51.112 { 00:30:51.112 "subsystem": "nbd", 00:30:51.112 "config": [] 00:30:51.112 } 00:30:51.112 ] 00:30:51.112 }' 00:30:51.112 09:12:58 keyring_file -- keyring/file.sh@114 -- # killprocess 91312 00:30:51.112 09:12:58 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 91312 ']' 00:30:51.112 09:12:58 keyring_file -- common/autotest_common.sh@954 -- # kill -0 91312 00:30:51.113 09:12:58 keyring_file -- 
common/autotest_common.sh@955 -- # uname 00:30:51.113 09:12:58 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:51.113 09:12:58 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91312 00:30:51.113 killing process with pid 91312 00:30:51.113 Received shutdown signal, test time was about 1.000000 seconds 00:30:51.113 00:30:51.113 Latency(us) 00:30:51.113 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:51.113 =================================================================================================================== 00:30:51.113 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:51.113 09:12:58 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:51.113 09:12:58 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:51.113 09:12:58 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91312' 00:30:51.113 09:12:58 keyring_file -- common/autotest_common.sh@969 -- # kill 91312 00:30:51.113 09:12:58 keyring_file -- common/autotest_common.sh@974 -- # wait 91312 00:30:52.488 09:12:59 keyring_file -- keyring/file.sh@117 -- # bperfpid=91567 00:30:52.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:52.488 09:12:59 keyring_file -- keyring/file.sh@119 -- # waitforlisten 91567 /var/tmp/bperf.sock 00:30:52.488 09:12:59 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 91567 ']' 00:30:52.488 09:12:59 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:30:52.488 09:12:59 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:30:52.488 "subsystems": [ 00:30:52.488 { 00:30:52.488 "subsystem": "keyring", 00:30:52.488 "config": [ 00:30:52.488 { 00:30:52.488 "method": "keyring_file_add_key", 00:30:52.488 "params": { 00:30:52.488 "name": "key0", 00:30:52.488 "path": "/tmp/tmp.Jv90Rp4smi" 00:30:52.488 } 00:30:52.488 }, 00:30:52.488 { 00:30:52.488 "method": "keyring_file_add_key", 00:30:52.488 "params": { 00:30:52.488 "name": "key1", 00:30:52.488 "path": "/tmp/tmp.7xTZq3OsMt" 00:30:52.488 } 00:30:52.488 } 00:30:52.488 ] 00:30:52.488 }, 00:30:52.488 { 00:30:52.488 "subsystem": "iobuf", 00:30:52.488 "config": [ 00:30:52.488 { 00:30:52.488 "method": "iobuf_set_options", 00:30:52.488 "params": { 00:30:52.488 "small_pool_count": 8192, 00:30:52.488 "large_pool_count": 1024, 00:30:52.488 "small_bufsize": 8192, 00:30:52.488 "large_bufsize": 135168 00:30:52.488 } 00:30:52.488 } 00:30:52.488 ] 00:30:52.488 }, 00:30:52.488 { 00:30:52.488 "subsystem": "sock", 00:30:52.488 "config": [ 00:30:52.488 { 00:30:52.488 "method": "sock_set_default_impl", 00:30:52.488 "params": { 00:30:52.488 "impl_name": "uring" 00:30:52.488 } 00:30:52.488 }, 00:30:52.488 { 00:30:52.488 "method": "sock_impl_set_options", 00:30:52.488 "params": { 00:30:52.488 "impl_name": "ssl", 00:30:52.488 "recv_buf_size": 4096, 00:30:52.488 "send_buf_size": 4096, 00:30:52.489 "enable_recv_pipe": true, 00:30:52.489 "enable_quickack": false, 00:30:52.489 "enable_placement_id": 0, 00:30:52.489 "enable_zerocopy_send_server": true, 00:30:52.489 "enable_zerocopy_send_client": false, 00:30:52.489 "zerocopy_threshold": 0, 00:30:52.489 "tls_version": 0, 00:30:52.489 "enable_ktls": false 00:30:52.489 } 00:30:52.489 }, 00:30:52.489 { 00:30:52.489 "method": "sock_impl_set_options", 00:30:52.489 "params": { 00:30:52.489 "impl_name": 
"posix", 00:30:52.489 "recv_buf_size": 2097152, 00:30:52.489 "send_buf_size": 2097152, 00:30:52.489 "enable_recv_pipe": true, 00:30:52.489 "enable_quickack": false, 00:30:52.489 "enable_placement_id": 0, 00:30:52.489 "enable_zerocopy_send_server": true, 00:30:52.489 "enable_zerocopy_send_client": false, 00:30:52.489 "zerocopy_threshold": 0, 00:30:52.489 "tls_version": 0, 00:30:52.489 "enable_ktls": false 00:30:52.489 } 00:30:52.489 }, 00:30:52.489 { 00:30:52.489 "method": "sock_impl_set_options", 00:30:52.489 "params": { 00:30:52.489 "impl_name": "uring", 00:30:52.489 "recv_buf_size": 2097152, 00:30:52.489 "send_buf_size": 2097152, 00:30:52.489 "enable_recv_pipe": true, 00:30:52.489 "enable_quickack": false, 00:30:52.489 "enable_placement_id": 0, 00:30:52.489 "enable_zerocopy_send_server": false, 00:30:52.489 "enable_zerocopy_send_client": false, 00:30:52.489 "zerocopy_threshold": 0, 00:30:52.489 "tls_version": 0, 00:30:52.489 "enable_ktls": false 00:30:52.489 } 00:30:52.489 } 00:30:52.489 ] 00:30:52.489 }, 00:30:52.489 { 00:30:52.489 "subsystem": "vmd", 00:30:52.489 "config": [] 00:30:52.489 }, 00:30:52.489 { 00:30:52.489 "subsystem": "accel", 00:30:52.489 "config": [ 00:30:52.489 { 00:30:52.489 "method": "accel_set_options", 00:30:52.489 "params": { 00:30:52.489 "small_cache_size": 128, 00:30:52.489 "large_cache_size": 16, 00:30:52.489 "task_count": 2048, 00:30:52.489 "sequence_count": 2048, 00:30:52.489 "buf_count": 2048 00:30:52.489 } 00:30:52.489 } 00:30:52.489 ] 00:30:52.489 }, 00:30:52.489 { 00:30:52.489 "subsystem": "bdev", 00:30:52.489 "config": [ 00:30:52.489 { 00:30:52.489 "method": "bdev_set_options", 00:30:52.489 "params": { 00:30:52.489 "bdev_io_pool_size": 65535, 00:30:52.489 "bdev_io_cache_size": 256, 00:30:52.489 "bdev_auto_examine": true, 00:30:52.489 "iobuf_small_cache_size": 128, 00:30:52.489 "iobuf_large_cache_size": 16 00:30:52.489 } 00:30:52.489 }, 00:30:52.489 { 00:30:52.489 "method": "bdev_raid_set_options", 00:30:52.489 "params": { 00:30:52.489 "process_window_size_kb": 1024, 00:30:52.489 "process_max_bandwidth_mb_sec": 0 00:30:52.489 } 00:30:52.489 }, 00:30:52.489 { 00:30:52.489 "method": "bdev_iscsi_set_options", 00:30:52.489 "params": { 00:30:52.489 "timeout_sec": 30 00:30:52.489 } 00:30:52.489 }, 00:30:52.489 { 00:30:52.489 "method": "bdev_nvme_set_options", 00:30:52.489 "params": { 00:30:52.489 "action_on_timeout": "none", 00:30:52.489 "timeout_us": 0, 00:30:52.489 "timeout_admin_us": 0, 00:30:52.489 "keep_alive_timeout_ms": 10000, 00:30:52.489 "arbitration_burst": 0, 00:30:52.489 "low_priority_weight": 0, 00:30:52.489 "medium_priority_weight": 0, 00:30:52.489 "high_priority_weight": 0, 00:30:52.489 "nvme_adminq_poll_period_us": 10000, 00:30:52.489 "nvme_ioq_poll_period_us": 0, 00:30:52.489 09:12:59 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:52.489 "io_queue_requests": 512, 00:30:52.489 "delay_cmd_submit": true, 00:30:52.489 "transport_retry_count": 4, 00:30:52.489 "bdev_retry_count": 3, 00:30:52.489 "transport_ack_timeout": 0, 00:30:52.489 "ctrlr_loss_timeout_sec": 0, 00:30:52.489 "reconnect_delay_sec": 0, 00:30:52.489 "fast_io_fail_timeout_sec": 0, 00:30:52.489 "disable_auto_failback": false, 00:30:52.489 "generate_uuids": false, 00:30:52.489 "transport_tos": 0, 00:30:52.489 "nvme_error_stat": false, 00:30:52.489 "rdma_srq_size": 0, 00:30:52.489 "io_path_stat": false, 00:30:52.489 "allow_accel_sequence": false, 00:30:52.489 "rdma_max_cq_size": 0, 00:30:52.489 "rdma_cm_event_timeout_ms": 0, 00:30:52.489 
"dhchap_digests": [ 00:30:52.489 "sha256", 00:30:52.489 "sha384", 00:30:52.489 "sha512" 00:30:52.489 ], 00:30:52.489 "dhchap_dhgroups": [ 00:30:52.489 "null", 00:30:52.489 "ffdhe2048", 00:30:52.489 "ffdhe3072", 00:30:52.489 "ffdhe4096", 00:30:52.489 "ffdhe6144", 00:30:52.489 "ffdhe8192" 00:30:52.489 ] 00:30:52.489 } 00:30:52.489 }, 00:30:52.489 { 00:30:52.489 "method": "bdev_nvme_attach_controller", 00:30:52.489 "params": { 00:30:52.489 "name": "nvme0", 00:30:52.489 "trtype": "TCP", 00:30:52.489 "adrfam": "IPv4", 00:30:52.489 "traddr": "127.0.0.1", 00:30:52.489 "trsvcid": "4420", 00:30:52.489 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:52.489 "prchk_reftag": false, 00:30:52.489 "prchk_guard": false, 00:30:52.489 "ctrlr_loss_timeout_sec": 0, 00:30:52.489 "reconnect_delay_sec": 0, 00:30:52.489 "fast_io_fail_timeout_sec": 0, 00:30:52.489 "psk": "key0", 00:30:52.489 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:52.489 "hdgst": false, 00:30:52.489 "ddgst": false 00:30:52.489 } 00:30:52.489 }, 00:30:52.489 { 00:30:52.489 "method": "bdev_nvme_set_hotplug", 00:30:52.489 "params": { 00:30:52.489 "period_us": 100000, 00:30:52.489 "enable": false 00:30:52.489 } 00:30:52.489 }, 00:30:52.489 { 00:30:52.489 "method": "bdev_wait_for_examine" 00:30:52.489 } 00:30:52.489 ] 00:30:52.489 }, 00:30:52.489 { 00:30:52.489 "subsystem": "nbd", 00:30:52.489 "config": [] 00:30:52.489 } 00:30:52.489 ] 00:30:52.489 }' 00:30:52.489 09:12:59 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:52.489 09:12:59 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:52.489 09:12:59 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:52.489 09:12:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:52.489 [2024-07-25 09:12:59.340945] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:30:52.489 [2024-07-25 09:12:59.341166] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91567 ] 00:30:52.489 [2024-07-25 09:12:59.508577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.748 [2024-07-25 09:12:59.749557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:53.006 [2024-07-25 09:13:00.029880] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:30:53.264 [2024-07-25 09:13:00.155721] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:53.264 09:13:00 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:53.264 09:13:00 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:30:53.264 09:13:00 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:30:53.264 09:13:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:53.264 09:13:00 keyring_file -- keyring/file.sh@120 -- # jq length 00:30:53.522 09:13:00 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:30:53.522 09:13:00 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:30:53.522 09:13:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:53.522 09:13:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:53.522 09:13:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:53.522 09:13:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:53.522 09:13:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:53.779 09:13:00 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:30:53.779 09:13:00 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:30:53.779 09:13:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:53.779 09:13:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:53.779 09:13:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:53.779 09:13:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:53.779 09:13:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:54.038 09:13:00 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:30:54.038 09:13:00 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:30:54.038 09:13:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:30:54.038 09:13:00 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:30:54.296 09:13:01 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:30:54.296 09:13:01 keyring_file -- keyring/file.sh@1 -- # cleanup 00:30:54.296 09:13:01 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.Jv90Rp4smi /tmp/tmp.7xTZq3OsMt 00:30:54.296 09:13:01 keyring_file -- keyring/file.sh@20 -- # killprocess 91567 00:30:54.296 09:13:01 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 91567 ']' 00:30:54.296 09:13:01 keyring_file -- common/autotest_common.sh@954 -- # kill -0 91567 00:30:54.296 09:13:01 keyring_file -- 
common/autotest_common.sh@955 -- # uname 00:30:54.296 09:13:01 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:54.296 09:13:01 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91567 00:30:54.296 09:13:01 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:54.296 09:13:01 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:54.296 killing process with pid 91567 00:30:54.296 09:13:01 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91567' 00:30:54.296 Received shutdown signal, test time was about 1.000000 seconds 00:30:54.296 00:30:54.296 Latency(us) 00:30:54.296 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:54.296 =================================================================================================================== 00:30:54.296 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:54.296 09:13:01 keyring_file -- common/autotest_common.sh@969 -- # kill 91567 00:30:54.296 09:13:01 keyring_file -- common/autotest_common.sh@974 -- # wait 91567 00:30:55.672 09:13:02 keyring_file -- keyring/file.sh@21 -- # killprocess 91295 00:30:55.672 09:13:02 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 91295 ']' 00:30:55.672 09:13:02 keyring_file -- common/autotest_common.sh@954 -- # kill -0 91295 00:30:55.672 09:13:02 keyring_file -- common/autotest_common.sh@955 -- # uname 00:30:55.672 09:13:02 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:55.672 09:13:02 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91295 00:30:55.672 09:13:02 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:55.672 09:13:02 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:55.672 killing process with pid 91295 00:30:55.672 09:13:02 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91295' 00:30:55.672 09:13:02 keyring_file -- common/autotest_common.sh@969 -- # kill 91295 00:30:55.672 [2024-07-25 09:13:02.534683] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:30:55.672 09:13:02 keyring_file -- common/autotest_common.sh@974 -- # wait 91295 00:30:57.632 00:30:57.632 real 0m19.540s 00:30:57.632 user 0m44.110s 00:30:57.632 sys 0m3.425s 00:30:57.632 09:13:04 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:57.632 09:13:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:57.632 ************************************ 00:30:57.632 END TEST keyring_file 00:30:57.632 ************************************ 00:30:57.632 09:13:04 -- spdk/autotest.sh@300 -- # [[ y == y ]] 00:30:57.632 09:13:04 -- spdk/autotest.sh@301 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:30:57.632 09:13:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:57.632 09:13:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:57.632 09:13:04 -- common/autotest_common.sh@10 -- # set +x 00:30:57.632 ************************************ 00:30:57.632 START TEST keyring_linux 00:30:57.632 ************************************ 00:30:57.632 09:13:04 keyring_linux -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:30:57.892 * Looking for test storage... 
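Before the keyring_linux run below gets going, the file-based flow exercised by the suite above condenses to a handful of RPCs against the bperf socket. A minimal sketch using only commands and flags that appear verbatim in the trace; the variable names are mine, the literal interchange-format key is the one this run produces for 00112233445566778899aabbccddeeff with digest 0, and prep_key's exact redirection is glossed over:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

# prep_key: a mktemp file holding the interchange-format PSK, mode 0600
key_path=$(mktemp)        # /tmp/tmp.Jv90Rp4smi in this run
echo 'NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
chmod 0600 "$key_path"

# Register the file as "key0", then attach a TLS controller that references it
"$rpc" -s "$sock" keyring_file_add_key key0 "$key_path"
"$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

# While the controller holds the key the reported refcnt is 2 (file.sh@99);
# removing it in that state only marks it "removed" and drops the count to 1.
"$rpc" -s "$sock" keyring_get_keys | jq '.[] | select(.name == "key0")'
"$rpc" -s "$sock" keyring_file_remove_key key0
"$rpc" -s "$sock" bdev_nvme_detach_controller nvme0

Detaching nvme0 releases the last reference, which is why file.sh@104 then sees keyring_get_keys return an empty list.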
00:30:57.892 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:30:57.892 09:13:04 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:30:57.892 09:13:04 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:57.892 09:13:04 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:30:57.892 09:13:04 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:57.892 09:13:04 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:57.892 09:13:04 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:57.892 09:13:04 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:57.892 09:13:04 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:57.892 09:13:04 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:57.892 09:13:04 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:57.892 09:13:04 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:57.892 09:13:04 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:57.892 09:13:04 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:57.892 09:13:04 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:30:57.892 09:13:04 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=a4705431-95c9-4bc1-9185-4a8233d2d7f5 00:30:57.892 09:13:04 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:57.892 09:13:04 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:57.892 09:13:04 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:57.892 09:13:04 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:57.892 09:13:04 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:57.892 09:13:04 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:57.892 09:13:04 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:57.892 09:13:04 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:57.892 09:13:04 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.892 09:13:04 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.892 09:13:04 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.892 09:13:04 keyring_linux -- paths/export.sh@5 -- # export PATH 00:30:57.892 09:13:04 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.892 09:13:04 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:30:57.892 09:13:04 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:57.892 09:13:04 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:57.892 09:13:04 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:57.892 09:13:04 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:57.892 09:13:04 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:57.892 09:13:04 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:57.892 09:13:04 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:57.892 09:13:04 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:57.892 09:13:04 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:30:57.892 09:13:04 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:30:57.892 09:13:04 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:30:57.892 09:13:04 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:30:57.892 09:13:04 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:30:57.892 09:13:04 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:30:57.892 09:13:04 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:30:57.892 09:13:04 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:30:57.892 09:13:04 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:30:57.892 09:13:04 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:57.892 09:13:04 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:30:57.892 09:13:04 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:30:57.892 09:13:04 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:57.892 09:13:04 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:57.892 09:13:04 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:30:57.892 09:13:04 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:57.892 09:13:04 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:57.892 09:13:04 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:30:57.892 09:13:04 keyring_linux -- nvmf/common.sh@705 -- # python - 00:30:57.892 09:13:04 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:30:57.892 09:13:04 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:30:57.892 /tmp/:spdk-test:key0 00:30:57.892 09:13:04 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:30:57.892 09:13:04 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:30:57.892 09:13:04 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:30:57.892 09:13:04 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:30:57.892 09:13:04 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:30:57.892 09:13:04 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:30:57.892 09:13:04 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:30:57.892 09:13:04 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:30:57.892 09:13:04 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:30:57.892 09:13:04 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:57.892 09:13:04 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:30:57.892 09:13:04 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:30:57.892 09:13:04 keyring_linux -- nvmf/common.sh@705 -- # python - 00:30:57.892 09:13:04 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:30:57.892 /tmp/:spdk-test:key1 00:30:57.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:57.892 09:13:04 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:30:57.892 09:13:04 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=91716 00:30:57.892 09:13:04 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:57.892 09:13:04 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 91716 00:30:57.892 09:13:04 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 91716 ']' 00:30:57.892 09:13:04 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:57.892 09:13:04 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:57.892 09:13:04 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:57.893 09:13:04 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:57.893 09:13:04 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:58.151 [2024-07-25 09:13:05.067239] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
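prep_key has just written /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1; the small python helper behind format_interchange_psk emits them in the TLS PSK interchange format, NVMeTLSkey-1:<digest>:<base64 payload>:. A quick way to inspect what this run actually produced; the strings below are the ones keyctl prints further down, and treating the four trailing payload bytes as a CRC-32 check over the key (and their byte order) is an assumption on my part:

key0='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
key1='NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs:'

# Strip the prefix and trailing colon, then decode the base64 payload of key0
payload=${key0#NVMeTLSkey-1:00:}
payload=${payload%:}
echo "$payload" | base64 -d | head -c 32; echo   # -> 00112233445566778899aabbccddeeff
echo "$payload" | base64 -d | wc -c              # -> 36 (32 key bytes + 4 check bytes)

The ":00:" digest field matches the digest=0 argument passed to prep_key above, and the configured hex string is carried as its 32 ASCII characters rather than as raw bytes.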
00:30:58.151 [2024-07-25 09:13:05.067713] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91716 ] 00:30:58.151 [2024-07-25 09:13:05.242970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:58.409 [2024-07-25 09:13:05.450718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:58.667 [2024-07-25 09:13:05.650583] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:30:59.235 09:13:06 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:59.235 09:13:06 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:30:59.235 09:13:06 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:30:59.235 09:13:06 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.235 09:13:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:59.235 [2024-07-25 09:13:06.258735] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:59.235 null0 00:30:59.235 [2024-07-25 09:13:06.290666] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:59.235 [2024-07-25 09:13:06.291023] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:59.235 09:13:06 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.235 09:13:06 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:30:59.235 251096795 00:30:59.235 09:13:06 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:30:59.235 634849576 00:30:59.235 09:13:06 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=91734 00:30:59.235 09:13:06 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:30:59.235 09:13:06 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 91734 /var/tmp/bperf.sock 00:30:59.235 09:13:06 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 91734 ']' 00:30:59.235 09:13:06 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:59.235 09:13:06 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:59.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:59.235 09:13:06 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:59.235 09:13:06 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:59.235 09:13:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:59.495 [2024-07-25 09:13:06.427944] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
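linux.sh@66-@67 have just seeded the kernel session keyring, and the bdevperf instance below resolves "--psk :spdk-test:key0" against it once keyring_linux_set_options --enable and framework_start_init have run (linux.sh@73-@74). Condensed from the commands in the trace, the keyctl lifecycle being exercised looks like the sketch below; the sn0/sn1 variables are mine, and the serial numbers are whatever keyctl assigns (251096795 and 634849576 in this run):

# keyctl add prints the serial number of the new key in the session keyring (@s)
sn0=$(keyctl add user :spdk-test:key0 'NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' @s)
sn1=$(keyctl add user :spdk-test:key1 'NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs:' @s)

# check_keys in linux.sh verifies the same key is visible both ways
keyctl search @s user :spdk-test:key0    # -> $sn0
keyctl print "$sn0"                      # -> the NVMeTLSkey-1:00:... payload

# cleanup (unlink_key) looks the serial up again and drops the link
keyctl unlink "$(keyctl search @s user :spdk-test:key0)"   # "1 links removed"
keyctl unlink "$(keyctl search @s user :spdk-test:key1)"

The failed attach at linux.sh@84 further down then exercises the error path by pointing the controller at :spdk-test:key1 instead.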
00:30:59.495 [2024-07-25 09:13:06.428134] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91734 ] 00:30:59.495 [2024-07-25 09:13:06.605907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:59.753 [2024-07-25 09:13:06.864209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:00.320 09:13:07 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:00.320 09:13:07 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:31:00.320 09:13:07 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:31:00.320 09:13:07 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:31:00.578 09:13:07 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:31:00.578 09:13:07 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:00.835 [2024-07-25 09:13:07.920573] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:31:01.093 09:13:08 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:31:01.093 09:13:08 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:31:01.351 [2024-07-25 09:13:08.295540] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:01.351 nvme0n1 00:31:01.351 09:13:08 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:31:01.351 09:13:08 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:31:01.351 09:13:08 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:31:01.351 09:13:08 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:31:01.351 09:13:08 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:01.351 09:13:08 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:31:01.610 09:13:08 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:31:01.610 09:13:08 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:31:01.610 09:13:08 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:31:01.610 09:13:08 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:31:01.610 09:13:08 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:31:01.610 09:13:08 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:01.610 09:13:08 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:01.869 09:13:08 keyring_linux -- keyring/linux.sh@25 -- # sn=251096795 00:31:01.869 09:13:08 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:31:01.869 09:13:08 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:31:01.869 
09:13:08 keyring_linux -- keyring/linux.sh@26 -- # [[ 251096795 == \2\5\1\0\9\6\7\9\5 ]] 00:31:01.870 09:13:08 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 251096795 00:31:01.870 09:13:08 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:31:01.870 09:13:08 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:02.128 Running I/O for 1 seconds... 00:31:03.064 00:31:03.064 Latency(us) 00:31:03.064 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:03.064 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:03.064 nvme0n1 : 1.01 8841.08 34.54 0.00 0.00 14365.15 4855.62 20256.58 00:31:03.064 =================================================================================================================== 00:31:03.064 Total : 8841.08 34.54 0.00 0.00 14365.15 4855.62 20256.58 00:31:03.064 0 00:31:03.064 09:13:10 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:03.064 09:13:10 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:03.322 09:13:10 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:31:03.322 09:13:10 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:31:03.322 09:13:10 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:31:03.322 09:13:10 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:31:03.322 09:13:10 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:03.322 09:13:10 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:31:03.581 09:13:10 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:31:03.581 09:13:10 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:31:03.581 09:13:10 keyring_linux -- keyring/linux.sh@23 -- # return 00:31:03.581 09:13:10 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:03.581 09:13:10 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:31:03.581 09:13:10 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:03.581 09:13:10 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:31:03.581 09:13:10 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:03.581 09:13:10 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:31:03.581 09:13:10 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:03.581 09:13:10 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:03.581 09:13:10 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:03.840 [2024-07-25 09:13:10.899928] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:03.840 [2024-07-25 09:13:10.900658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002f880 (107): Transport endpoint is not connected 00:31:03.840 [2024-07-25 09:13:10.901627] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002f880 (9): Bad file descriptor 00:31:03.840 [2024-07-25 09:13:10.902621] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:03.840 [2024-07-25 09:13:10.902678] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:31:03.840 [2024-07-25 09:13:10.902710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:03.840 request: 00:31:03.840 { 00:31:03.840 "name": "nvme0", 00:31:03.840 "trtype": "tcp", 00:31:03.840 "traddr": "127.0.0.1", 00:31:03.840 "adrfam": "ipv4", 00:31:03.840 "trsvcid": "4420", 00:31:03.840 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:03.840 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:03.840 "prchk_reftag": false, 00:31:03.840 "prchk_guard": false, 00:31:03.840 "hdgst": false, 00:31:03.840 "ddgst": false, 00:31:03.840 "psk": ":spdk-test:key1", 00:31:03.840 "method": "bdev_nvme_attach_controller", 00:31:03.840 "req_id": 1 00:31:03.840 } 00:31:03.840 Got JSON-RPC error response 00:31:03.840 response: 00:31:03.840 { 00:31:03.840 "code": -5, 00:31:03.840 "message": "Input/output error" 00:31:03.840 } 00:31:03.840 09:13:10 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:31:03.840 09:13:10 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:03.840 09:13:10 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:03.840 09:13:10 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:03.840 09:13:10 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:31:03.840 09:13:10 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:31:03.840 09:13:10 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:31:03.840 09:13:10 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:31:03.840 09:13:10 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:31:03.840 09:13:10 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:31:03.840 09:13:10 keyring_linux -- keyring/linux.sh@33 -- # sn=251096795 00:31:03.840 09:13:10 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 251096795 00:31:03.840 1 links removed 00:31:03.840 09:13:10 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:31:03.840 09:13:10 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:31:03.840 09:13:10 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:31:03.840 09:13:10 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:31:03.840 09:13:10 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:31:03.840 09:13:10 keyring_linux -- keyring/linux.sh@33 -- # sn=634849576 00:31:03.840 09:13:10 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 634849576 00:31:03.840 1 links removed 00:31:03.840 09:13:10 
keyring_linux -- keyring/linux.sh@41 -- # killprocess 91734 00:31:03.840 09:13:10 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 91734 ']' 00:31:03.840 09:13:10 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 91734 00:31:03.840 09:13:10 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:31:04.100 09:13:10 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:04.100 09:13:10 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91734 00:31:04.100 09:13:10 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:04.100 09:13:10 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:04.100 killing process with pid 91734 00:31:04.100 09:13:10 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91734' 00:31:04.100 09:13:10 keyring_linux -- common/autotest_common.sh@969 -- # kill 91734 00:31:04.100 Received shutdown signal, test time was about 1.000000 seconds 00:31:04.100 00:31:04.100 Latency(us) 00:31:04.100 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:04.100 =================================================================================================================== 00:31:04.100 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:04.100 09:13:10 keyring_linux -- common/autotest_common.sh@974 -- # wait 91734 00:31:05.035 09:13:12 keyring_linux -- keyring/linux.sh@42 -- # killprocess 91716 00:31:05.035 09:13:12 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 91716 ']' 00:31:05.035 09:13:12 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 91716 00:31:05.035 09:13:12 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:31:05.035 09:13:12 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:05.035 09:13:12 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91716 00:31:05.035 09:13:12 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:05.035 09:13:12 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:05.035 killing process with pid 91716 00:31:05.035 09:13:12 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91716' 00:31:05.035 09:13:12 keyring_linux -- common/autotest_common.sh@969 -- # kill 91716 00:31:05.035 09:13:12 keyring_linux -- common/autotest_common.sh@974 -- # wait 91716 00:31:07.644 00:31:07.644 real 0m9.515s 00:31:07.644 user 0m16.320s 00:31:07.644 sys 0m1.789s 00:31:07.644 09:13:14 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:07.644 ************************************ 00:31:07.644 END TEST keyring_linux 00:31:07.644 09:13:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:07.644 ************************************ 00:31:07.644 09:13:14 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:31:07.644 09:13:14 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:31:07.644 09:13:14 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:31:07.645 09:13:14 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:31:07.645 09:13:14 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:31:07.645 09:13:14 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:31:07.645 09:13:14 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:31:07.645 09:13:14 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:31:07.645 09:13:14 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:31:07.645 09:13:14 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 
']' 00:31:07.645 09:13:14 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:31:07.645 09:13:14 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:31:07.645 09:13:14 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:31:07.645 09:13:14 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:31:07.645 09:13:14 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:31:07.645 09:13:14 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:31:07.645 09:13:14 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:31:07.645 09:13:14 -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:07.645 09:13:14 -- common/autotest_common.sh@10 -- # set +x 00:31:07.645 09:13:14 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:31:07.645 09:13:14 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:31:07.645 09:13:14 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:31:07.645 09:13:14 -- common/autotest_common.sh@10 -- # set +x 00:31:09.019 INFO: APP EXITING 00:31:09.019 INFO: killing all VMs 00:31:09.019 INFO: killing vhost app 00:31:09.019 INFO: EXIT DONE 00:31:09.585 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:09.585 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:31:09.585 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:31:10.151 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:10.151 Cleaning 00:31:10.151 Removing: /var/run/dpdk/spdk0/config 00:31:10.152 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:31:10.152 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:31:10.152 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:31:10.152 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:31:10.152 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:31:10.152 Removing: /var/run/dpdk/spdk0/hugepage_info 00:31:10.152 Removing: /var/run/dpdk/spdk1/config 00:31:10.152 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:31:10.152 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:31:10.152 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:31:10.152 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:31:10.152 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:31:10.152 Removing: /var/run/dpdk/spdk1/hugepage_info 00:31:10.152 Removing: /var/run/dpdk/spdk2/config 00:31:10.152 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:31:10.410 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:31:10.410 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:31:10.410 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:31:10.410 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:31:10.410 Removing: /var/run/dpdk/spdk2/hugepage_info 00:31:10.410 Removing: /var/run/dpdk/spdk3/config 00:31:10.410 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:31:10.410 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:31:10.410 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:31:10.410 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:31:10.410 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:31:10.410 Removing: /var/run/dpdk/spdk3/hugepage_info 00:31:10.410 Removing: /var/run/dpdk/spdk4/config 00:31:10.410 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:31:10.410 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:31:10.410 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:31:10.410 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:31:10.410 Removing: 
/var/run/dpdk/spdk4/fbarray_memzone 00:31:10.410 Removing: /var/run/dpdk/spdk4/hugepage_info 00:31:10.410 Removing: /dev/shm/nvmf_trace.0 00:31:10.410 Removing: /dev/shm/spdk_tgt_trace.pid59550 00:31:10.410 Removing: /var/run/dpdk/spdk0 00:31:10.410 Removing: /var/run/dpdk/spdk1 00:31:10.410 Removing: /var/run/dpdk/spdk2 00:31:10.410 Removing: /var/run/dpdk/spdk3 00:31:10.410 Removing: /var/run/dpdk/spdk4 00:31:10.410 Removing: /var/run/dpdk/spdk_pid59334 00:31:10.410 Removing: /var/run/dpdk/spdk_pid59550 00:31:10.410 Removing: /var/run/dpdk/spdk_pid59771 00:31:10.410 Removing: /var/run/dpdk/spdk_pid59875 00:31:10.410 Removing: /var/run/dpdk/spdk_pid59930 00:31:10.410 Removing: /var/run/dpdk/spdk_pid60059 00:31:10.410 Removing: /var/run/dpdk/spdk_pid60083 00:31:10.410 Removing: /var/run/dpdk/spdk_pid60231 00:31:10.410 Removing: /var/run/dpdk/spdk_pid60440 00:31:10.410 Removing: /var/run/dpdk/spdk_pid60598 00:31:10.410 Removing: /var/run/dpdk/spdk_pid60707 00:31:10.410 Removing: /var/run/dpdk/spdk_pid60806 00:31:10.410 Removing: /var/run/dpdk/spdk_pid60920 00:31:10.410 Removing: /var/run/dpdk/spdk_pid61020 00:31:10.410 Removing: /var/run/dpdk/spdk_pid61064 00:31:10.410 Removing: /var/run/dpdk/spdk_pid61096 00:31:10.410 Removing: /var/run/dpdk/spdk_pid61164 00:31:10.410 Removing: /var/run/dpdk/spdk_pid61254 00:31:10.410 Removing: /var/run/dpdk/spdk_pid61717 00:31:10.410 Removing: /var/run/dpdk/spdk_pid61792 00:31:10.410 Removing: /var/run/dpdk/spdk_pid61868 00:31:10.410 Removing: /var/run/dpdk/spdk_pid61884 00:31:10.410 Removing: /var/run/dpdk/spdk_pid62039 00:31:10.410 Removing: /var/run/dpdk/spdk_pid62056 00:31:10.410 Removing: /var/run/dpdk/spdk_pid62215 00:31:10.410 Removing: /var/run/dpdk/spdk_pid62231 00:31:10.410 Removing: /var/run/dpdk/spdk_pid62295 00:31:10.410 Removing: /var/run/dpdk/spdk_pid62319 00:31:10.410 Removing: /var/run/dpdk/spdk_pid62383 00:31:10.410 Removing: /var/run/dpdk/spdk_pid62406 00:31:10.410 Removing: /var/run/dpdk/spdk_pid62593 00:31:10.410 Removing: /var/run/dpdk/spdk_pid62630 00:31:10.410 Removing: /var/run/dpdk/spdk_pid62711 00:31:10.410 Removing: /var/run/dpdk/spdk_pid63047 00:31:10.410 Removing: /var/run/dpdk/spdk_pid63060 00:31:10.410 Removing: /var/run/dpdk/spdk_pid63114 00:31:10.410 Removing: /var/run/dpdk/spdk_pid63145 00:31:10.410 Removing: /var/run/dpdk/spdk_pid63178 00:31:10.410 Removing: /var/run/dpdk/spdk_pid63215 00:31:10.410 Removing: /var/run/dpdk/spdk_pid63246 00:31:10.410 Removing: /var/run/dpdk/spdk_pid63279 00:31:10.410 Removing: /var/run/dpdk/spdk_pid63314 00:31:10.410 Removing: /var/run/dpdk/spdk_pid63346 00:31:10.410 Removing: /var/run/dpdk/spdk_pid63374 00:31:10.410 Removing: /var/run/dpdk/spdk_pid63416 00:31:10.410 Removing: /var/run/dpdk/spdk_pid63448 00:31:10.410 Removing: /var/run/dpdk/spdk_pid63481 00:31:10.410 Removing: /var/run/dpdk/spdk_pid63512 00:31:10.410 Removing: /var/run/dpdk/spdk_pid63544 00:31:10.410 Removing: /var/run/dpdk/spdk_pid63576 00:31:10.410 Removing: /var/run/dpdk/spdk_pid63614 00:31:10.410 Removing: /var/run/dpdk/spdk_pid63644 00:31:10.410 Removing: /var/run/dpdk/spdk_pid63677 00:31:10.410 Removing: /var/run/dpdk/spdk_pid63725 00:31:10.410 Removing: /var/run/dpdk/spdk_pid63756 00:31:10.410 Removing: /var/run/dpdk/spdk_pid63803 00:31:10.410 Removing: /var/run/dpdk/spdk_pid63879 00:31:10.410 Removing: /var/run/dpdk/spdk_pid63925 00:31:10.410 Removing: /var/run/dpdk/spdk_pid63952 00:31:10.410 Removing: /var/run/dpdk/spdk_pid63998 00:31:10.668 Removing: /var/run/dpdk/spdk_pid64025 00:31:10.668 Removing: 
/var/run/dpdk/spdk_pid64050 00:31:10.668 Removing: /var/run/dpdk/spdk_pid64110 00:31:10.668 Removing: /var/run/dpdk/spdk_pid64147 00:31:10.668 Removing: /var/run/dpdk/spdk_pid64193 00:31:10.668 Removing: /var/run/dpdk/spdk_pid64220 00:31:10.668 Removing: /var/run/dpdk/spdk_pid64247 00:31:10.668 Removing: /var/run/dpdk/spdk_pid64274 00:31:10.668 Removing: /var/run/dpdk/spdk_pid64296 00:31:10.668 Removing: /var/run/dpdk/spdk_pid64323 00:31:10.668 Removing: /var/run/dpdk/spdk_pid64350 00:31:10.668 Removing: /var/run/dpdk/spdk_pid64378 00:31:10.668 Removing: /var/run/dpdk/spdk_pid64424 00:31:10.668 Removing: /var/run/dpdk/spdk_pid64468 00:31:10.668 Removing: /var/run/dpdk/spdk_pid64495 00:31:10.668 Removing: /var/run/dpdk/spdk_pid64541 00:31:10.668 Removing: /var/run/dpdk/spdk_pid64568 00:31:10.668 Removing: /var/run/dpdk/spdk_pid64593 00:31:10.668 Removing: /var/run/dpdk/spdk_pid64651 00:31:10.668 Removing: /var/run/dpdk/spdk_pid64680 00:31:10.668 Removing: /var/run/dpdk/spdk_pid64724 00:31:10.668 Removing: /var/run/dpdk/spdk_pid64749 00:31:10.668 Removing: /var/run/dpdk/spdk_pid64773 00:31:10.668 Removing: /var/run/dpdk/spdk_pid64794 00:31:10.668 Removing: /var/run/dpdk/spdk_pid64819 00:31:10.668 Removing: /var/run/dpdk/spdk_pid64844 00:31:10.668 Removing: /var/run/dpdk/spdk_pid64869 00:31:10.668 Removing: /var/run/dpdk/spdk_pid64894 00:31:10.668 Removing: /var/run/dpdk/spdk_pid64980 00:31:10.668 Removing: /var/run/dpdk/spdk_pid65084 00:31:10.668 Removing: /var/run/dpdk/spdk_pid65250 00:31:10.668 Removing: /var/run/dpdk/spdk_pid65301 00:31:10.668 Removing: /var/run/dpdk/spdk_pid65364 00:31:10.668 Removing: /var/run/dpdk/spdk_pid65396 00:31:10.668 Removing: /var/run/dpdk/spdk_pid65430 00:31:10.668 Removing: /var/run/dpdk/spdk_pid65461 00:31:10.668 Removing: /var/run/dpdk/spdk_pid65510 00:31:10.668 Removing: /var/run/dpdk/spdk_pid65539 00:31:10.668 Removing: /var/run/dpdk/spdk_pid65621 00:31:10.668 Removing: /var/run/dpdk/spdk_pid65671 00:31:10.668 Removing: /var/run/dpdk/spdk_pid65755 00:31:10.668 Removing: /var/run/dpdk/spdk_pid65885 00:31:10.668 Removing: /var/run/dpdk/spdk_pid65986 00:31:10.668 Removing: /var/run/dpdk/spdk_pid66044 00:31:10.668 Removing: /var/run/dpdk/spdk_pid66164 00:31:10.668 Removing: /var/run/dpdk/spdk_pid66230 00:31:10.668 Removing: /var/run/dpdk/spdk_pid66281 00:31:10.668 Removing: /var/run/dpdk/spdk_pid66528 00:31:10.668 Removing: /var/run/dpdk/spdk_pid66645 00:31:10.668 Removing: /var/run/dpdk/spdk_pid66693 00:31:10.668 Removing: /var/run/dpdk/spdk_pid67042 00:31:10.668 Removing: /var/run/dpdk/spdk_pid67084 00:31:10.668 Removing: /var/run/dpdk/spdk_pid67407 00:31:10.668 Removing: /var/run/dpdk/spdk_pid67839 00:31:10.668 Removing: /var/run/dpdk/spdk_pid68127 00:31:10.668 Removing: /var/run/dpdk/spdk_pid68957 00:31:10.668 Removing: /var/run/dpdk/spdk_pid69814 00:31:10.668 Removing: /var/run/dpdk/spdk_pid69948 00:31:10.668 Removing: /var/run/dpdk/spdk_pid70028 00:31:10.668 Removing: /var/run/dpdk/spdk_pid71340 00:31:10.668 Removing: /var/run/dpdk/spdk_pid71643 00:31:10.668 Removing: /var/run/dpdk/spdk_pid75052 00:31:10.668 Removing: /var/run/dpdk/spdk_pid75400 00:31:10.668 Removing: /var/run/dpdk/spdk_pid75515 00:31:10.668 Removing: /var/run/dpdk/spdk_pid75654 00:31:10.668 Removing: /var/run/dpdk/spdk_pid75688 00:31:10.668 Removing: /var/run/dpdk/spdk_pid75729 00:31:10.668 Removing: /var/run/dpdk/spdk_pid75763 00:31:10.668 Removing: /var/run/dpdk/spdk_pid75880 00:31:10.668 Removing: /var/run/dpdk/spdk_pid76021 00:31:10.668 Removing: /var/run/dpdk/spdk_pid76219 
00:31:10.668 Removing: /var/run/dpdk/spdk_pid76315 00:31:10.668 Removing: /var/run/dpdk/spdk_pid76533 00:31:10.668 Removing: /var/run/dpdk/spdk_pid76651 00:31:10.668 Removing: /var/run/dpdk/spdk_pid76770 00:31:10.668 Removing: /var/run/dpdk/spdk_pid77102 00:31:10.668 Removing: /var/run/dpdk/spdk_pid77485 00:31:10.668 Removing: /var/run/dpdk/spdk_pid77499 00:31:10.668 Removing: /var/run/dpdk/spdk_pid79762 00:31:10.668 Removing: /var/run/dpdk/spdk_pid79774 00:31:10.668 Removing: /var/run/dpdk/spdk_pid80063 00:31:10.668 Removing: /var/run/dpdk/spdk_pid80079 00:31:10.668 Removing: /var/run/dpdk/spdk_pid80100 00:31:10.668 Removing: /var/run/dpdk/spdk_pid80132 00:31:10.668 Removing: /var/run/dpdk/spdk_pid80142 00:31:10.668 Removing: /var/run/dpdk/spdk_pid80228 00:31:10.669 Removing: /var/run/dpdk/spdk_pid80236 00:31:10.669 Removing: /var/run/dpdk/spdk_pid80347 00:31:10.669 Removing: /var/run/dpdk/spdk_pid80350 00:31:10.926 Removing: /var/run/dpdk/spdk_pid80461 00:31:10.926 Removing: /var/run/dpdk/spdk_pid80464 00:31:10.926 Removing: /var/run/dpdk/spdk_pid80869 00:31:10.927 Removing: /var/run/dpdk/spdk_pid80905 00:31:10.927 Removing: /var/run/dpdk/spdk_pid81007 00:31:10.927 Removing: /var/run/dpdk/spdk_pid81084 00:31:10.927 Removing: /var/run/dpdk/spdk_pid81404 00:31:10.927 Removing: /var/run/dpdk/spdk_pid81607 00:31:10.927 Removing: /var/run/dpdk/spdk_pid82003 00:31:10.927 Removing: /var/run/dpdk/spdk_pid82516 00:31:10.927 Removing: /var/run/dpdk/spdk_pid83335 00:31:10.927 Removing: /var/run/dpdk/spdk_pid83945 00:31:10.927 Removing: /var/run/dpdk/spdk_pid83948 00:31:10.927 Removing: /var/run/dpdk/spdk_pid85868 00:31:10.927 Removing: /var/run/dpdk/spdk_pid85942 00:31:10.927 Removing: /var/run/dpdk/spdk_pid86013 00:31:10.927 Removing: /var/run/dpdk/spdk_pid86086 00:31:10.927 Removing: /var/run/dpdk/spdk_pid86226 00:31:10.927 Removing: /var/run/dpdk/spdk_pid86297 00:31:10.927 Removing: /var/run/dpdk/spdk_pid86364 00:31:10.927 Removing: /var/run/dpdk/spdk_pid86431 00:31:10.927 Removing: /var/run/dpdk/spdk_pid86764 00:31:10.927 Removing: /var/run/dpdk/spdk_pid87931 00:31:10.927 Removing: /var/run/dpdk/spdk_pid88080 00:31:10.927 Removing: /var/run/dpdk/spdk_pid88326 00:31:10.927 Removing: /var/run/dpdk/spdk_pid88886 00:31:10.927 Removing: /var/run/dpdk/spdk_pid89045 00:31:10.927 Removing: /var/run/dpdk/spdk_pid89215 00:31:10.927 Removing: /var/run/dpdk/spdk_pid89311 00:31:10.927 Removing: /var/run/dpdk/spdk_pid89479 00:31:10.927 Removing: /var/run/dpdk/spdk_pid89592 00:31:10.927 Removing: /var/run/dpdk/spdk_pid90268 00:31:10.927 Removing: /var/run/dpdk/spdk_pid90305 00:31:10.927 Removing: /var/run/dpdk/spdk_pid90341 00:31:10.927 Removing: /var/run/dpdk/spdk_pid90803 00:31:10.927 Removing: /var/run/dpdk/spdk_pid90839 00:31:10.927 Removing: /var/run/dpdk/spdk_pid90870 00:31:10.927 Removing: /var/run/dpdk/spdk_pid91295 00:31:10.927 Removing: /var/run/dpdk/spdk_pid91312 00:31:10.927 Removing: /var/run/dpdk/spdk_pid91567 00:31:10.927 Removing: /var/run/dpdk/spdk_pid91716 00:31:10.927 Removing: /var/run/dpdk/spdk_pid91734 00:31:10.927 Clean 00:31:10.927 09:13:17 -- common/autotest_common.sh@1451 -- # return 0 00:31:10.927 09:13:17 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:31:10.927 09:13:17 -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:10.927 09:13:17 -- common/autotest_common.sh@10 -- # set +x 00:31:10.927 09:13:17 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:31:10.927 09:13:17 -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:10.927 09:13:17 -- 
common/autotest_common.sh@10 -- # set +x 00:31:10.927 09:13:18 -- spdk/autotest.sh@391 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:31:10.927 09:13:18 -- spdk/autotest.sh@393 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:31:10.927 09:13:18 -- spdk/autotest.sh@393 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:31:10.927 09:13:18 -- spdk/autotest.sh@395 -- # hash lcov 00:31:10.927 09:13:18 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:31:11.184 09:13:18 -- spdk/autotest.sh@397 -- # hostname 00:31:11.184 09:13:18 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:31:11.184 geninfo: WARNING: invalid characters removed from testname! 00:31:37.813 09:13:42 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:40.344 09:13:46 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:42.902 09:13:49 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:45.437 09:13:52 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:47.972 09:13:55 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:51.256 09:13:57 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:53.789 09:14:00 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:31:53.789 09:14:00 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:53.789 09:14:00 -- 
scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:31:53.789 09:14:00 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:53.789 09:14:00 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:53.789 09:14:00 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.789 09:14:00 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.789 09:14:00 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.789 09:14:00 -- paths/export.sh@5 -- $ export PATH 00:31:53.789 09:14:00 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.789 09:14:00 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:31:53.789 09:14:00 -- common/autobuild_common.sh@447 -- $ date +%s 00:31:53.789 09:14:00 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721898840.XXXXXX 00:31:53.789 09:14:00 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721898840.nvBoVm 00:31:53.789 09:14:00 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:31:53.789 09:14:00 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:31:53.789 09:14:00 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:31:53.789 09:14:00 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:31:53.789 09:14:00 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:31:53.789 09:14:00 -- common/autobuild_common.sh@463 -- $ get_config_params 00:31:53.789 09:14:00 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:31:53.789 09:14:00 -- common/autotest_common.sh@10 -- $ set +x 00:31:53.789 09:14:00 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user 
--with-uring' 00:31:53.789 09:14:00 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:31:53.789 09:14:00 -- pm/common@17 -- $ local monitor 00:31:53.789 09:14:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:53.789 09:14:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:53.789 09:14:00 -- pm/common@25 -- $ sleep 1 00:31:53.790 09:14:00 -- pm/common@21 -- $ date +%s 00:31:53.790 09:14:00 -- pm/common@21 -- $ date +%s 00:31:53.790 09:14:00 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721898840 00:31:53.790 09:14:00 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721898840 00:31:53.790 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721898840_collect-vmstat.pm.log 00:31:53.790 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721898840_collect-cpu-load.pm.log 00:31:54.722 09:14:01 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:31:54.722 09:14:01 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:31:54.722 09:14:01 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:31:54.722 09:14:01 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:31:54.722 09:14:01 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:31:54.722 09:14:01 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:31:54.722 09:14:01 -- spdk/autopackage.sh@19 -- $ timing_finish 00:31:54.722 09:14:01 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:31:54.722 09:14:01 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:31:54.722 09:14:01 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:31:54.722 09:14:01 -- spdk/autopackage.sh@20 -- $ exit 0 00:31:54.722 09:14:01 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:31:54.722 09:14:01 -- pm/common@29 -- $ signal_monitor_resources TERM 00:31:54.722 09:14:01 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:31:54.722 09:14:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:54.722 09:14:01 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:31:54.722 09:14:01 -- pm/common@44 -- $ pid=93478 00:31:54.722 09:14:01 -- pm/common@50 -- $ kill -TERM 93478 00:31:54.722 09:14:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:54.722 09:14:01 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:31:54.722 09:14:01 -- pm/common@44 -- $ pid=93479 00:31:54.722 09:14:01 -- pm/common@50 -- $ kill -TERM 93479 00:31:54.722 + [[ -n 5111 ]] 00:31:54.722 + sudo kill 5111 00:31:54.731 [Pipeline] } 00:31:54.749 [Pipeline] // timeout 00:31:54.755 [Pipeline] } 00:31:54.769 [Pipeline] // stage 00:31:54.775 [Pipeline] } 00:31:54.794 [Pipeline] // catchError 00:31:54.804 [Pipeline] stage 00:31:54.806 [Pipeline] { (Stop VM) 00:31:54.819 [Pipeline] sh 00:31:55.097 + vagrant halt 00:31:59.280 ==> default: Halting domain... 00:32:04.591 [Pipeline] sh 00:32:04.894 + vagrant destroy -f 00:32:09.083 ==> default: Removing domain... 
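Note on the stop_monitor_resources trace above: the resource monitors started at the packaging stage (collect-cpu-load, collect-vmstat) are shut down by reading the PID recorded in each monitor's .pid file under the power output directory and sending SIGTERM. A minimal standalone sketch of that pidfile pattern, assuming an illustrative POWER_DIR and function name rather than the actual pm/common implementation:

#!/usr/bin/env bash
# Sketch of a pidfile-based monitor shutdown, modelled on the pm/common
# trace above. POWER_DIR is an assumed example path, not the real one.
POWER_DIR=${POWER_DIR:-/tmp/power}

stop_monitors() {
    local pidfile pid
    for pidfile in "$POWER_DIR"/*.pid; do
        [[ -e $pidfile ]] || continue        # no monitors were started
        pid=$(<"$pidfile")
        # Ask the monitor to exit; tolerate monitors that already exited.
        kill -TERM "$pid" 2>/dev/null || true
        rm -f "$pidfile"
    done
}

stop_monitors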
00:32:09.094 [Pipeline] sh 00:32:09.373 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:32:09.382 [Pipeline] } 00:32:09.401 [Pipeline] // stage 00:32:09.406 [Pipeline] } 00:32:09.424 [Pipeline] // dir 00:32:09.430 [Pipeline] } 00:32:09.446 [Pipeline] // wrap 00:32:09.452 [Pipeline] } 00:32:09.467 [Pipeline] // catchError 00:32:09.476 [Pipeline] stage 00:32:09.478 [Pipeline] { (Epilogue) 00:32:09.491 [Pipeline] sh 00:32:09.771 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:32:16.343 [Pipeline] catchError 00:32:16.345 [Pipeline] { 00:32:16.359 [Pipeline] sh 00:32:16.657 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:32:16.657 Artifacts sizes are good 00:32:16.666 [Pipeline] } 00:32:16.683 [Pipeline] // catchError 00:32:16.695 [Pipeline] archiveArtifacts 00:32:16.702 Archiving artifacts 00:32:16.888 [Pipeline] cleanWs 00:32:16.899 [WS-CLEANUP] Deleting project workspace... 00:32:16.899 [WS-CLEANUP] Deferred wipeout is used... 00:32:16.906 [WS-CLEANUP] done 00:32:16.908 [Pipeline] } 00:32:16.927 [Pipeline] // stage 00:32:16.932 [Pipeline] } 00:32:16.949 [Pipeline] // node 00:32:16.955 [Pipeline] End of Pipeline 00:32:16.992 Finished: SUCCESS
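For reference, the coverage post-processing traced earlier in this log (the spdk/autotest.sh lcov steps) reduces to a standard lcov sequence: capture the counters left by the test run, merge them with the pre-test baseline using -a, then strip third-party and helper paths using -r. A condensed sketch under those assumptions follows; the paths mirror this run's layout, and the final genhtml step is added for illustration only and is not part of the trace above.

#!/usr/bin/env bash
# Condensed sketch of the lcov workflow traced above; not the autotest.sh code itself.
repo=/home/vagrant/spdk_repo/spdk
out=$repo/../output
rc=(--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1)

# Capture the counters produced by the test run.
# cov_base.info is assumed to have been captured before the tests ran.
lcov "${rc[@]}" --no-external -q -c -d "$repo" -o "$out/cov_test.info"

# Merge the pre-test baseline with the test capture.
lcov "${rc[@]}" -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

# Drop DPDK, system headers, and example/app helpers from the report.
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov "${rc[@]}" -q -r "$out/cov_total.info" "$pattern" -o "$out/cov_total.info"
done

# Render an HTML report (illustrative extra step; not recorded in the log above).
genhtml -q "$out/cov_total.info" -o "$out/coverage"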